00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 110 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3288 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.109 Using shallow fetch with depth 1 00:00:00.109 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.109 > git --version # timeout=10 00:00:00.135 > git --version # 'git version 2.39.2' 00:00:00.135 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.153 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.153 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.253 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.266 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.277 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD) 00:00:04.277 > git config core.sparsecheckout # timeout=10 00:00:04.286 > git read-tree -mu HEAD # timeout=10 00:00:04.304 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5 00:00:04.320 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems" 00:00:04.320 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:04.424 [Pipeline] Start of Pipeline 00:00:04.435 [Pipeline] library 00:00:04.436 Loading library shm_lib@master 00:00:04.436 Library shm_lib@master is cached. Copying from home. 
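The prologue above pins the jbp repository to a single commit via a shallow fetch. A minimal sketch of the same checkout done by hand (credential, proxy and timeout handling from the Jenkins job omitted; the target directory name is arbitrary, and --depth=1 only works here if the commit is still the branch tip):

  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # depth=1 keeps only the tip of master, matching the job's shallow fetch
  git fetch --tags --force --depth=1 origin refs/heads/master
  # check out the exact revision recorded in the log
  git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422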
00:00:04.450 [Pipeline] node 00:00:04.459 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.460 [Pipeline] { 00:00:04.469 [Pipeline] catchError 00:00:04.471 [Pipeline] { 00:00:04.485 [Pipeline] wrap 00:00:04.524 [Pipeline] { 00:00:04.541 [Pipeline] stage 00:00:04.543 [Pipeline] { (Prologue) 00:00:04.764 [Pipeline] sh 00:00:05.048 + logger -p user.info -t JENKINS-CI 00:00:05.066 [Pipeline] echo 00:00:05.068 Node: GP11 00:00:05.076 [Pipeline] sh 00:00:05.374 [Pipeline] setCustomBuildProperty 00:00:05.383 [Pipeline] echo 00:00:05.385 Cleanup processes 00:00:05.389 [Pipeline] sh 00:00:05.671 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.671 197889 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.683 [Pipeline] sh 00:00:05.960 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.960 ++ grep -v 'sudo pgrep' 00:00:05.960 ++ awk '{print $1}' 00:00:05.960 + sudo kill -9 00:00:05.960 + true 00:00:05.972 [Pipeline] cleanWs 00:00:05.981 [WS-CLEANUP] Deleting project workspace... 00:00:05.981 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.988 [WS-CLEANUP] done 00:00:05.991 [Pipeline] setCustomBuildProperty 00:00:06.004 [Pipeline] sh 00:00:06.285 + sudo git config --global --replace-all safe.directory '*' 00:00:06.378 [Pipeline] httpRequest 00:00:06.411 [Pipeline] echo 00:00:06.412 Sorcerer 10.211.164.101 is alive 00:00:06.419 [Pipeline] httpRequest 00:00:06.423 HttpMethod: GET 00:00:06.423 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.424 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.445 Response Code: HTTP/1.1 200 OK 00:00:06.445 Success: Status code 200 is in the accepted range: 200,404 00:00:06.446 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:09.199 [Pipeline] sh 00:00:09.486 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:09.503 [Pipeline] httpRequest 00:00:09.530 [Pipeline] echo 00:00:09.532 Sorcerer 10.211.164.101 is alive 00:00:09.541 [Pipeline] httpRequest 00:00:09.546 HttpMethod: GET 00:00:09.547 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.547 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.562 Response Code: HTTP/1.1 200 OK 00:00:09.562 Success: Status code 200 is in the accepted range: 200,404 00:00:09.563 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:16.258 [Pipeline] sh 00:01:16.543 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:19.845 [Pipeline] sh 00:01:20.132 + git -C spdk log --oneline -n5 00:01:20.132 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:20.132 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:20.132 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:20.132 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:20.132 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:20.157 [Pipeline] withCredentials 00:01:20.168 > git --version # timeout=10 00:01:20.182 > git --version # 'git version 2.39.2' 00:01:20.202 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:20.204 [Pipeline] { 00:01:20.214 [Pipeline] retry 00:01:20.216 
[Pipeline] { 00:01:20.236 [Pipeline] sh 00:01:20.523 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:20.796 [Pipeline] } 00:01:20.818 [Pipeline] // retry 00:01:20.823 [Pipeline] } 00:01:20.843 [Pipeline] // withCredentials 00:01:20.854 [Pipeline] httpRequest 00:01:20.874 [Pipeline] echo 00:01:20.877 Sorcerer 10.211.164.101 is alive 00:01:20.886 [Pipeline] httpRequest 00:01:20.896 HttpMethod: GET 00:01:20.897 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:20.898 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:20.900 Response Code: HTTP/1.1 200 OK 00:01:20.901 Success: Status code 200 is in the accepted range: 200,404 00:01:20.901 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.220 [Pipeline] sh 00:01:28.503 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.419 [Pipeline] sh 00:01:30.705 + git -C dpdk log --oneline -n5 00:01:30.705 eeb0605f11 version: 23.11.0 00:01:30.705 238778122a doc: update release notes for 23.11 00:01:30.705 46aa6b3cfc doc: fix description of RSS features 00:01:30.705 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:30.705 7e421ae345 devtools: support skipping forbid rule check 00:01:30.716 [Pipeline] } 00:01:30.733 [Pipeline] // stage 00:01:30.743 [Pipeline] stage 00:01:30.744 [Pipeline] { (Prepare) 00:01:30.765 [Pipeline] writeFile 00:01:30.782 [Pipeline] sh 00:01:31.067 + logger -p user.info -t JENKINS-CI 00:01:31.079 [Pipeline] sh 00:01:31.364 + logger -p user.info -t JENKINS-CI 00:01:31.377 [Pipeline] sh 00:01:31.662 + cat autorun-spdk.conf 00:01:31.662 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.662 SPDK_TEST_NVMF=1 00:01:31.662 SPDK_TEST_NVME_CLI=1 00:01:31.662 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.662 SPDK_TEST_NVMF_NICS=e810 00:01:31.662 SPDK_TEST_VFIOUSER=1 00:01:31.662 SPDK_RUN_UBSAN=1 00:01:31.662 NET_TYPE=phy 00:01:31.662 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.662 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.670 RUN_NIGHTLY=1 00:01:31.674 [Pipeline] readFile 00:01:31.700 [Pipeline] withEnv 00:01:31.702 [Pipeline] { 00:01:31.715 [Pipeline] sh 00:01:32.001 + set -ex 00:01:32.001 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:32.001 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.001 ++ SPDK_TEST_NVMF=1 00:01:32.001 ++ SPDK_TEST_NVME_CLI=1 00:01:32.001 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.001 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.001 ++ SPDK_TEST_VFIOUSER=1 00:01:32.001 ++ SPDK_RUN_UBSAN=1 00:01:32.001 ++ NET_TYPE=phy 00:01:32.001 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.001 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.001 ++ RUN_NIGHTLY=1 00:01:32.001 + case $SPDK_TEST_NVMF_NICS in 00:01:32.001 + DRIVERS=ice 00:01:32.001 + [[ tcp == \r\d\m\a ]] 00:01:32.001 + [[ -n ice ]] 00:01:32.001 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:32.001 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:32.001 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:32.001 rmmod: ERROR: Module irdma is not currently loaded 00:01:32.001 rmmod: ERROR: Module i40iw is not currently loaded 00:01:32.001 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:32.001 + true 00:01:32.001 + for D in $DRIVERS 
00:01:32.001 + sudo modprobe ice 00:01:32.001 + exit 0 00:01:32.012 [Pipeline] } 00:01:32.029 [Pipeline] // withEnv 00:01:32.034 [Pipeline] } 00:01:32.051 [Pipeline] // stage 00:01:32.061 [Pipeline] catchError 00:01:32.063 [Pipeline] { 00:01:32.077 [Pipeline] timeout 00:01:32.078 Timeout set to expire in 50 min 00:01:32.080 [Pipeline] { 00:01:32.095 [Pipeline] stage 00:01:32.097 [Pipeline] { (Tests) 00:01:32.112 [Pipeline] sh 00:01:32.398 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.398 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.398 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.398 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:32.398 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.398 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:32.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:32.398 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:32.398 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:32.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:32.398 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:32.398 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:32.398 + source /etc/os-release 00:01:32.398 ++ NAME='Fedora Linux' 00:01:32.398 ++ VERSION='38 (Cloud Edition)' 00:01:32.398 ++ ID=fedora 00:01:32.398 ++ VERSION_ID=38 00:01:32.398 ++ VERSION_CODENAME= 00:01:32.398 ++ PLATFORM_ID=platform:f38 00:01:32.398 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:32.398 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:32.398 ++ LOGO=fedora-logo-icon 00:01:32.398 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:32.398 ++ HOME_URL=https://fedoraproject.org/ 00:01:32.398 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:32.398 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:32.398 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:32.398 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:32.398 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:32.398 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:32.398 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:32.398 ++ SUPPORT_END=2024-05-14 00:01:32.398 ++ VARIANT='Cloud Edition' 00:01:32.398 ++ VARIANT_ID=cloud 00:01:32.398 + uname -a 00:01:32.398 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:32.398 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:33.336 Hugepages 00:01:33.336 node hugesize free / total 00:01:33.336 node0 1048576kB 0 / 0 00:01:33.336 node0 2048kB 0 / 0 00:01:33.336 node1 1048576kB 0 / 0 00:01:33.336 node1 2048kB 0 / 0 00:01:33.336 00:01:33.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.336 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:33.336 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 
00:01:33.336 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:33.336 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:33.336 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:33.336 + rm -f /tmp/spdk-ld-path 00:01:33.336 + source autorun-spdk.conf 00:01:33.336 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.336 ++ SPDK_TEST_NVMF=1 00:01:33.336 ++ SPDK_TEST_NVME_CLI=1 00:01:33.336 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.336 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.336 ++ SPDK_TEST_VFIOUSER=1 00:01:33.336 ++ SPDK_RUN_UBSAN=1 00:01:33.336 ++ NET_TYPE=phy 00:01:33.336 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:33.336 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.336 ++ RUN_NIGHTLY=1 00:01:33.336 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.336 + [[ -n '' ]] 00:01:33.336 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.336 + for M in /var/spdk/build-*-manifest.txt 00:01:33.336 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.336 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.336 + for M in /var/spdk/build-*-manifest.txt 00:01:33.336 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.336 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.336 ++ uname 00:01:33.595 + [[ Linux == \L\i\n\u\x ]] 00:01:33.595 + sudo dmesg -T 00:01:33.595 + sudo dmesg --clear 00:01:33.595 + dmesg_pid=199231 00:01:33.595 + [[ Fedora Linux == FreeBSD ]] 00:01:33.595 + sudo dmesg -Tw 00:01:33.595 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.595 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.595 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.595 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.595 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.595 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.595 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.595 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:33.595 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.595 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.595 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.595 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.595 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.595 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.595 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.595 Test configuration: 00:01:33.595 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.596 SPDK_TEST_NVMF=1 00:01:33.596 SPDK_TEST_NVME_CLI=1 00:01:33.596 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.596 SPDK_TEST_NVMF_NICS=e810 00:01:33.596 SPDK_TEST_VFIOUSER=1 00:01:33.596 SPDK_RUN_UBSAN=1 00:01:33.596 NET_TYPE=phy 00:01:33.596 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:33.596 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.596 RUN_NIGHTLY=1 03:01:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:33.596 03:01:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.596 03:01:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.596 03:01:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.596 03:01:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.596 03:01:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.596 03:01:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.596 03:01:00 -- paths/export.sh@5 -- $ export PATH 00:01:33.596 03:01:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.596 03:01:00 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:33.596 03:01:00 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:33.596 03:01:00 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721696460.XXXXXX 00:01:33.596 03:01:00 -- common/autobuild_common.sh@437 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721696460.34jeDl 00:01:33.596 03:01:00 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:33.596 03:01:00 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:33.596 03:01:00 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.596 03:01:00 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:33.596 03:01:00 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:33.596 03:01:00 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.596 03:01:00 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:33.596 03:01:00 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:33.596 03:01:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.596 03:01:00 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:33.596 03:01:00 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:33.596 03:01:00 -- pm/common@17 -- $ local monitor 00:01:33.596 03:01:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.596 03:01:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.596 03:01:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.596 03:01:00 -- pm/common@21 -- $ date +%s 00:01:33.596 03:01:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.596 03:01:00 -- pm/common@21 -- $ date +%s 00:01:33.596 03:01:00 -- pm/common@25 -- $ sleep 1 00:01:33.596 03:01:00 -- pm/common@21 -- $ date +%s 00:01:33.596 03:01:00 -- pm/common@21 -- $ date +%s 00:01:33.596 03:01:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721696460 00:01:33.596 03:01:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721696460 00:01:33.596 03:01:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721696460 00:01:33.596 03:01:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721696460 00:01:33.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721696460_collect-vmstat.pm.log 00:01:33.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721696460_collect-cpu-load.pm.log 00:01:33.596 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721696460_collect-cpu-temp.pm.log 00:01:33.596 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721696460_collect-bmc-pm.bmc.pm.log 00:01:34.535 03:01:01 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:34.535 03:01:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.535 03:01:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.535 03:01:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:34.535 03:01:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.535 Tue Jul 23 01:01:01 AM UTC 2024 00:01:34.535 03:01:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.535 v24.05-13-g5fa2f5086 00:01:34.535 03:01:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:34.535 03:01:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.535 03:01:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.535 03:01:01 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:34.535 03:01:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:34.535 03:01:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.535 ************************************ 00:01:34.535 START TEST ubsan 00:01:34.535 ************************************ 00:01:34.535 03:01:01 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:34.535 using ubsan 00:01:34.535 00:01:34.535 real 0m0.000s 00:01:34.535 user 0m0.000s 00:01:34.535 sys 0m0.000s 00:01:34.535 03:01:01 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:34.535 03:01:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.535 ************************************ 00:01:34.535 END TEST ubsan 00:01:34.535 ************************************ 00:01:34.535 03:01:01 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:34.535 03:01:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:34.535 03:01:01 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:34.535 03:01:01 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:34.535 03:01:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:34.535 03:01:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.794 ************************************ 00:01:34.794 START TEST build_native_dpdk 00:01:34.794 ************************************ 00:01:34.794 03:01:01 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:34.794 03:01:01 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:34.794 03:01:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:34.795 eeb0605f11 version: 23.11.0 00:01:34.795 238778122a doc: update release notes for 23.11 00:01:34.795 46aa6b3cfc doc: fix description of RSS features 00:01:34.795 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:34.795 7e421ae345 devtools: support skipping forbid rule check 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:34.795 03:01:01 build_native_dpdk -- 
scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:34.795 03:01:01 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:34.795 patching file config/rte_config.h 00:01:34.795 Hunk #1 succeeded at 60 (offset 1 line). 
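The xtrace above steps through the cmp_versions helper in scripts/common.sh: each version string is split on '.', '-' and ':' and the parts are compared numerically, and since 23.11.0 is not less than 21.11.0 the rte_config.h patch path is taken. A compact standalone sketch of that comparison idea (the function name ver_lt and the echo output are illustrative, not part of the SPDK scripts; non-numeric fields are not handled):

  ver_lt() {
      # true (0) if $1 < $2, comparing dot/dash/colon separated numeric fields
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # versions equal, so not strictly less
  }
  ver_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0"   # prints the message, matching the return 1 in the trace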
00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:34.795 03:01:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:38.981 The Meson build system 00:01:38.981 Version: 1.3.1 00:01:38.981 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:38.981 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:38.981 Build type: native build 00:01:38.981 Program cat found: YES (/usr/bin/cat) 00:01:38.981 Project name: DPDK 00:01:38.981 Project version: 23.11.0 00:01:38.981 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:38.981 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:38.981 Host machine cpu family: x86_64 00:01:38.981 Host machine cpu: x86_64 00:01:38.981 Message: ## Building in Developer Mode ## 00:01:38.981 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.981 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:38.981 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.981 Program python3 found: YES (/usr/bin/python3) 00:01:38.981 Program cat found: YES (/usr/bin/cat) 00:01:38.981 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
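Two warnings in this configure run point at newer meson usage: the "machine" option is deprecated in favour of "cpu_instruction_set", and bare "meson [options]" should be "meson setup [options]" (that second warning appears further down, just before ninja starts). A sketch of the logged command with both suggestions applied and everything else unchanged:

  meson setup build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,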
00:01:38.981 Compiler for C supports arguments -march=native: YES 00:01:38.981 Checking for size of "void *" : 8 00:01:38.981 Checking for size of "void *" : 8 (cached) 00:01:38.981 Library m found: YES 00:01:38.981 Library numa found: YES 00:01:38.981 Has header "numaif.h" : YES 00:01:38.981 Library fdt found: NO 00:01:38.981 Library execinfo found: NO 00:01:38.981 Has header "execinfo.h" : YES 00:01:38.981 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:38.981 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.981 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.981 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.981 Run-time dependency openssl found: YES 3.0.9 00:01:38.981 Run-time dependency libpcap found: YES 1.10.4 00:01:38.981 Has header "pcap.h" with dependency libpcap: YES 00:01:38.981 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.981 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.981 Compiler for C supports arguments -Wformat: YES 00:01:38.981 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:38.981 Compiler for C supports arguments -Wformat-security: NO 00:01:38.981 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.981 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.981 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.981 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.981 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.981 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.981 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.981 Compiler for C supports arguments -Wundef: YES 00:01:38.981 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.981 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.981 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:38.981 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.981 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:38.981 Program objdump found: YES (/usr/bin/objdump) 00:01:38.981 Compiler for C supports arguments -mavx512f: YES 00:01:38.981 Checking if "AVX512 checking" compiles: YES 00:01:38.981 Fetching value of define "__SSE4_2__" : 1 00:01:38.981 Fetching value of define "__AES__" : 1 00:01:38.981 Fetching value of define "__AVX__" : 1 00:01:38.981 Fetching value of define "__AVX2__" : (undefined) 00:01:38.981 Fetching value of define "__AVX512BW__" : (undefined) 00:01:38.981 Fetching value of define "__AVX512CD__" : (undefined) 00:01:38.981 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:38.981 Fetching value of define "__AVX512F__" : (undefined) 00:01:38.981 Fetching value of define "__AVX512VL__" : (undefined) 00:01:38.981 Fetching value of define "__PCLMUL__" : 1 00:01:38.981 Fetching value of define "__RDRND__" : 1 00:01:38.981 Fetching value of define "__RDSEED__" : (undefined) 00:01:38.981 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:38.981 Fetching value of define "__znver1__" : (undefined) 00:01:38.981 Fetching value of define "__znver2__" : (undefined) 00:01:38.981 Fetching value of define "__znver3__" : (undefined) 00:01:38.981 Fetching value of define "__znver4__" : (undefined) 00:01:38.981 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:38.981 Message: lib/log: Defining dependency "log" 00:01:38.981 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:38.981 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.981 Checking for function "getentropy" : NO 00:01:38.981 Message: lib/eal: Defining dependency "eal" 00:01:38.981 Message: lib/ring: Defining dependency "ring" 00:01:38.981 Message: lib/rcu: Defining dependency "rcu" 00:01:38.981 Message: lib/mempool: Defining dependency "mempool" 00:01:38.981 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.981 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:38.981 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.981 Compiler for C supports arguments -mpclmul: YES 00:01:38.981 Compiler for C supports arguments -maes: YES 00:01:38.981 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.981 Compiler for C supports arguments -mavx512bw: YES 00:01:38.981 Compiler for C supports arguments -mavx512dq: YES 00:01:38.981 Compiler for C supports arguments -mavx512vl: YES 00:01:38.981 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:38.981 Compiler for C supports arguments -mavx2: YES 00:01:38.981 Compiler for C supports arguments -mavx: YES 00:01:38.981 Message: lib/net: Defining dependency "net" 00:01:38.981 Message: lib/meter: Defining dependency "meter" 00:01:38.981 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.981 Message: lib/pci: Defining dependency "pci" 00:01:38.981 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.981 Message: lib/metrics: Defining dependency "metrics" 00:01:38.981 Message: lib/hash: Defining dependency "hash" 00:01:38.981 Message: lib/timer: Defining dependency "timer" 00:01:38.981 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:38.981 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:38.981 Message: lib/acl: Defining dependency "acl" 00:01:38.981 Message: lib/bbdev: Defining dependency "bbdev" 00:01:38.981 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:38.981 Run-time dependency libelf found: YES 0.190 00:01:38.981 Message: lib/bpf: Defining dependency "bpf" 00:01:38.981 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:38.981 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.981 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.981 Message: lib/distributor: Defining dependency "distributor" 00:01:38.981 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.981 Message: lib/efd: Defining dependency "efd" 00:01:38.981 Message: lib/eventdev: Defining dependency "eventdev" 00:01:38.981 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:38.981 Message: lib/gpudev: Defining dependency "gpudev" 00:01:38.981 Message: lib/gro: Defining dependency "gro" 00:01:38.981 Message: lib/gso: Defining dependency "gso" 00:01:38.981 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:38.981 Message: lib/jobstats: Defining dependency "jobstats" 00:01:38.981 Message: lib/latencystats: Defining dependency "latencystats" 00:01:38.981 Message: lib/lpm: Defining dependency "lpm" 00:01:38.981 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:38.981 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:38.981 Message: lib/member: Defining dependency "member" 00:01:38.981 Message: lib/pcapng: Defining dependency "pcapng" 00:01:38.981 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.981 Message: lib/power: Defining dependency "power" 00:01:38.981 Message: lib/rawdev: Defining dependency "rawdev" 00:01:38.981 Message: lib/regexdev: Defining dependency "regexdev" 00:01:38.981 Message: lib/mldev: Defining dependency "mldev" 00:01:38.981 Message: lib/rib: Defining dependency "rib" 00:01:38.981 Message: lib/reorder: Defining dependency "reorder" 00:01:38.981 Message: lib/sched: Defining dependency "sched" 00:01:38.981 Message: lib/security: Defining dependency "security" 00:01:38.981 Message: lib/stack: Defining dependency "stack" 00:01:38.981 Has header "linux/userfaultfd.h" : YES 00:01:38.981 Has header "linux/vduse.h" : YES 00:01:38.981 Message: lib/vhost: Defining dependency "vhost" 00:01:38.981 Message: lib/ipsec: Defining dependency "ipsec" 00:01:38.981 Message: lib/pdcp: Defining dependency "pdcp" 00:01:38.981 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.981 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:38.981 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:38.981 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:38.981 Message: lib/fib: Defining dependency "fib" 00:01:38.981 Message: lib/port: Defining dependency "port" 00:01:38.981 Message: lib/pdump: Defining dependency "pdump" 00:01:38.981 Message: lib/table: Defining dependency "table" 00:01:38.981 Message: lib/pipeline: Defining dependency "pipeline" 00:01:38.981 Message: lib/graph: Defining dependency "graph" 00:01:38.982 Message: lib/node: Defining dependency "node" 00:01:40.362 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:40.362 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:40.362 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:40.362 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:40.362 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:40.362 Compiler for C supports arguments -Wno-unused-value: YES 00:01:40.362 Compiler for C supports arguments -Wno-format: YES 00:01:40.362 Compiler for C supports arguments -Wno-format-security: YES 00:01:40.362 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:40.362 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:40.362 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:40.362 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:40.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.362 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.362 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:40.362 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:40.362 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:40.362 Has header "sys/epoll.h" : YES 00:01:40.362 Program doxygen found: YES (/usr/bin/doxygen) 00:01:40.362 Configuring doxy-api-html.conf using configuration 00:01:40.362 Configuring doxy-api-man.conf using configuration 00:01:40.362 Program mandb found: YES (/usr/bin/mandb) 00:01:40.362 Program sphinx-build found: NO 00:01:40.362 Configuring rte_build_config.h using configuration 00:01:40.362 Message: 00:01:40.362 ================= 00:01:40.362 Applications Enabled 00:01:40.362 
================= 00:01:40.362 00:01:40.362 apps: 00:01:40.362 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:40.362 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:40.362 test-pmd, test-regex, test-sad, test-security-perf, 00:01:40.362 00:01:40.362 Message: 00:01:40.362 ================= 00:01:40.362 Libraries Enabled 00:01:40.362 ================= 00:01:40.362 00:01:40.362 libs: 00:01:40.362 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:40.362 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:40.362 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:40.362 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:40.362 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:40.362 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:40.362 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:40.362 00:01:40.362 00:01:40.362 Message: 00:01:40.362 =============== 00:01:40.362 Drivers Enabled 00:01:40.362 =============== 00:01:40.362 00:01:40.362 common: 00:01:40.362 00:01:40.362 bus: 00:01:40.362 pci, vdev, 00:01:40.362 mempool: 00:01:40.362 ring, 00:01:40.362 dma: 00:01:40.362 00:01:40.362 net: 00:01:40.362 i40e, 00:01:40.362 raw: 00:01:40.362 00:01:40.362 crypto: 00:01:40.362 00:01:40.362 compress: 00:01:40.362 00:01:40.362 regex: 00:01:40.362 00:01:40.362 ml: 00:01:40.362 00:01:40.362 vdpa: 00:01:40.362 00:01:40.362 event: 00:01:40.362 00:01:40.362 baseband: 00:01:40.362 00:01:40.362 gpu: 00:01:40.362 00:01:40.362 00:01:40.362 Message: 00:01:40.362 ================= 00:01:40.362 Content Skipped 00:01:40.362 ================= 00:01:40.362 00:01:40.362 apps: 00:01:40.362 00:01:40.362 libs: 00:01:40.362 00:01:40.362 drivers: 00:01:40.362 common/cpt: not in enabled drivers build config 00:01:40.362 common/dpaax: not in enabled drivers build config 00:01:40.362 common/iavf: not in enabled drivers build config 00:01:40.362 common/idpf: not in enabled drivers build config 00:01:40.363 common/mvep: not in enabled drivers build config 00:01:40.363 common/octeontx: not in enabled drivers build config 00:01:40.363 bus/auxiliary: not in enabled drivers build config 00:01:40.363 bus/cdx: not in enabled drivers build config 00:01:40.363 bus/dpaa: not in enabled drivers build config 00:01:40.363 bus/fslmc: not in enabled drivers build config 00:01:40.363 bus/ifpga: not in enabled drivers build config 00:01:40.363 bus/platform: not in enabled drivers build config 00:01:40.363 bus/vmbus: not in enabled drivers build config 00:01:40.363 common/cnxk: not in enabled drivers build config 00:01:40.363 common/mlx5: not in enabled drivers build config 00:01:40.363 common/nfp: not in enabled drivers build config 00:01:40.363 common/qat: not in enabled drivers build config 00:01:40.363 common/sfc_efx: not in enabled drivers build config 00:01:40.363 mempool/bucket: not in enabled drivers build config 00:01:40.363 mempool/cnxk: not in enabled drivers build config 00:01:40.363 mempool/dpaa: not in enabled drivers build config 00:01:40.363 mempool/dpaa2: not in enabled drivers build config 00:01:40.363 mempool/octeontx: not in enabled drivers build config 00:01:40.363 mempool/stack: not in enabled drivers build config 00:01:40.363 dma/cnxk: not in enabled drivers build config 00:01:40.363 dma/dpaa: not in enabled drivers build config 00:01:40.363 dma/dpaa2: not in enabled drivers build 
config 00:01:40.363 dma/hisilicon: not in enabled drivers build config 00:01:40.363 dma/idxd: not in enabled drivers build config 00:01:40.363 dma/ioat: not in enabled drivers build config 00:01:40.363 dma/skeleton: not in enabled drivers build config 00:01:40.363 net/af_packet: not in enabled drivers build config 00:01:40.363 net/af_xdp: not in enabled drivers build config 00:01:40.363 net/ark: not in enabled drivers build config 00:01:40.363 net/atlantic: not in enabled drivers build config 00:01:40.363 net/avp: not in enabled drivers build config 00:01:40.363 net/axgbe: not in enabled drivers build config 00:01:40.363 net/bnx2x: not in enabled drivers build config 00:01:40.363 net/bnxt: not in enabled drivers build config 00:01:40.363 net/bonding: not in enabled drivers build config 00:01:40.363 net/cnxk: not in enabled drivers build config 00:01:40.363 net/cpfl: not in enabled drivers build config 00:01:40.363 net/cxgbe: not in enabled drivers build config 00:01:40.363 net/dpaa: not in enabled drivers build config 00:01:40.363 net/dpaa2: not in enabled drivers build config 00:01:40.363 net/e1000: not in enabled drivers build config 00:01:40.363 net/ena: not in enabled drivers build config 00:01:40.363 net/enetc: not in enabled drivers build config 00:01:40.363 net/enetfec: not in enabled drivers build config 00:01:40.363 net/enic: not in enabled drivers build config 00:01:40.363 net/failsafe: not in enabled drivers build config 00:01:40.363 net/fm10k: not in enabled drivers build config 00:01:40.363 net/gve: not in enabled drivers build config 00:01:40.363 net/hinic: not in enabled drivers build config 00:01:40.363 net/hns3: not in enabled drivers build config 00:01:40.363 net/iavf: not in enabled drivers build config 00:01:40.363 net/ice: not in enabled drivers build config 00:01:40.363 net/idpf: not in enabled drivers build config 00:01:40.363 net/igc: not in enabled drivers build config 00:01:40.363 net/ionic: not in enabled drivers build config 00:01:40.363 net/ipn3ke: not in enabled drivers build config 00:01:40.363 net/ixgbe: not in enabled drivers build config 00:01:40.363 net/mana: not in enabled drivers build config 00:01:40.363 net/memif: not in enabled drivers build config 00:01:40.363 net/mlx4: not in enabled drivers build config 00:01:40.363 net/mlx5: not in enabled drivers build config 00:01:40.363 net/mvneta: not in enabled drivers build config 00:01:40.363 net/mvpp2: not in enabled drivers build config 00:01:40.363 net/netvsc: not in enabled drivers build config 00:01:40.363 net/nfb: not in enabled drivers build config 00:01:40.363 net/nfp: not in enabled drivers build config 00:01:40.363 net/ngbe: not in enabled drivers build config 00:01:40.363 net/null: not in enabled drivers build config 00:01:40.363 net/octeontx: not in enabled drivers build config 00:01:40.363 net/octeon_ep: not in enabled drivers build config 00:01:40.363 net/pcap: not in enabled drivers build config 00:01:40.363 net/pfe: not in enabled drivers build config 00:01:40.363 net/qede: not in enabled drivers build config 00:01:40.363 net/ring: not in enabled drivers build config 00:01:40.363 net/sfc: not in enabled drivers build config 00:01:40.363 net/softnic: not in enabled drivers build config 00:01:40.363 net/tap: not in enabled drivers build config 00:01:40.363 net/thunderx: not in enabled drivers build config 00:01:40.363 net/txgbe: not in enabled drivers build config 00:01:40.363 net/vdev_netvsc: not in enabled drivers build config 00:01:40.363 net/vhost: not in enabled drivers build config 
00:01:40.363 net/virtio: not in enabled drivers build config 00:01:40.363 net/vmxnet3: not in enabled drivers build config 00:01:40.363 raw/cnxk_bphy: not in enabled drivers build config 00:01:40.363 raw/cnxk_gpio: not in enabled drivers build config 00:01:40.363 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:40.363 raw/ifpga: not in enabled drivers build config 00:01:40.363 raw/ntb: not in enabled drivers build config 00:01:40.363 raw/skeleton: not in enabled drivers build config 00:01:40.363 crypto/armv8: not in enabled drivers build config 00:01:40.363 crypto/bcmfs: not in enabled drivers build config 00:01:40.363 crypto/caam_jr: not in enabled drivers build config 00:01:40.363 crypto/ccp: not in enabled drivers build config 00:01:40.363 crypto/cnxk: not in enabled drivers build config 00:01:40.363 crypto/dpaa_sec: not in enabled drivers build config 00:01:40.363 crypto/dpaa2_sec: not in enabled drivers build config 00:01:40.363 crypto/ipsec_mb: not in enabled drivers build config 00:01:40.363 crypto/mlx5: not in enabled drivers build config 00:01:40.363 crypto/mvsam: not in enabled drivers build config 00:01:40.363 crypto/nitrox: not in enabled drivers build config 00:01:40.363 crypto/null: not in enabled drivers build config 00:01:40.363 crypto/octeontx: not in enabled drivers build config 00:01:40.363 crypto/openssl: not in enabled drivers build config 00:01:40.363 crypto/scheduler: not in enabled drivers build config 00:01:40.363 crypto/uadk: not in enabled drivers build config 00:01:40.363 crypto/virtio: not in enabled drivers build config 00:01:40.363 compress/isal: not in enabled drivers build config 00:01:40.363 compress/mlx5: not in enabled drivers build config 00:01:40.363 compress/octeontx: not in enabled drivers build config 00:01:40.363 compress/zlib: not in enabled drivers build config 00:01:40.363 regex/mlx5: not in enabled drivers build config 00:01:40.363 regex/cn9k: not in enabled drivers build config 00:01:40.363 ml/cnxk: not in enabled drivers build config 00:01:40.363 vdpa/ifc: not in enabled drivers build config 00:01:40.363 vdpa/mlx5: not in enabled drivers build config 00:01:40.363 vdpa/nfp: not in enabled drivers build config 00:01:40.363 vdpa/sfc: not in enabled drivers build config 00:01:40.363 event/cnxk: not in enabled drivers build config 00:01:40.363 event/dlb2: not in enabled drivers build config 00:01:40.363 event/dpaa: not in enabled drivers build config 00:01:40.363 event/dpaa2: not in enabled drivers build config 00:01:40.363 event/dsw: not in enabled drivers build config 00:01:40.363 event/opdl: not in enabled drivers build config 00:01:40.363 event/skeleton: not in enabled drivers build config 00:01:40.363 event/sw: not in enabled drivers build config 00:01:40.363 event/octeontx: not in enabled drivers build config 00:01:40.363 baseband/acc: not in enabled drivers build config 00:01:40.363 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:40.363 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:40.363 baseband/la12xx: not in enabled drivers build config 00:01:40.363 baseband/null: not in enabled drivers build config 00:01:40.363 baseband/turbo_sw: not in enabled drivers build config 00:01:40.363 gpu/cuda: not in enabled drivers build config 00:01:40.363 00:01:40.363 00:01:40.363 Build targets in project: 220 00:01:40.363 00:01:40.363 DPDK 23.11.0 00:01:40.363 00:01:40.363 User defined options 00:01:40.363 libdir : lib 00:01:40.363 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.363 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:40.363 c_link_args : 00:01:40.363 enable_docs : false 00:01:40.363 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:40.363 enable_kmods : false 00:01:40.363 machine : native 00:01:40.363 tests : false 00:01:40.363 00:01:40.363 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.363 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:40.363 03:01:06 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:40.363 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:40.363 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:40.363 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:40.363 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:40.363 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:40.363 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:40.363 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.624 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.624 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:40.624 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:40.624 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.624 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.624 [12/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:40.624 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:40.624 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.624 [15/710] Linking static target lib/librte_kvargs.a 00:01:40.624 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:40.624 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:40.887 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.887 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:40.887 [20/710] Linking static target lib/librte_log.a 00:01:40.887 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.887 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.458 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.458 [24/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.458 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.458 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:41.458 [27/710] Linking target lib/librte_log.so.24.0 00:01:41.459 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:41.459 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.459 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.459 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.459 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.459 [33/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:41.459 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:41.721 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.721 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.721 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.721 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.721 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:41.721 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.721 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.721 [42/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:41.721 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.721 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.721 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.721 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:41.721 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.721 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.721 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.721 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:41.721 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.721 [52/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:41.721 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.721 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.721 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.721 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.721 [57/710] Linking target lib/librte_kvargs.so.24.0 00:01:41.721 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.721 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.985 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.985 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.985 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.985 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:41.985 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.985 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:42.247 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:42.247 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:42.247 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:42.247 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:42.247 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:42.247 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:42.247 
[72/710] Linking static target lib/librte_pci.a 00:01:42.509 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:42.509 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.509 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:42.509 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:42.509 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.509 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:42.509 [79/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.509 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:42.768 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:42.768 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.768 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:42.768 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:42.768 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:42.768 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:42.768 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:42.768 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:42.768 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:42.768 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:42.768 [91/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:42.768 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:42.768 [93/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:42.768 [94/710] Linking static target lib/librte_ring.a 00:01:42.768 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.768 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.033 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.033 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.033 [99/710] Linking static target lib/librte_meter.a 00:01:43.033 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:43.033 [101/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.033 [102/710] Linking static target lib/librte_telemetry.a 00:01:43.033 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.033 [104/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.033 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.033 [106/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:43.033 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.033 [108/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.033 [109/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:43.033 [110/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:43.293 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:43.293 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.293 [113/710] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:43.293 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.293 [115/710] Linking static target lib/librte_eal.a 00:01:43.293 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.293 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:43.293 [118/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.293 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:43.555 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.555 [121/710] Linking static target lib/librte_net.a 00:01:43.555 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:43.555 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.555 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:43.555 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:43.555 [126/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.819 [127/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.819 [128/710] Linking static target lib/librte_mempool.a 00:01:43.819 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.819 [130/710] Linking target lib/librte_telemetry.so.24.0 00:01:43.819 [131/710] Linking static target lib/librte_cmdline.a 00:01:43.819 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:43.819 [133/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.819 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:43.819 [135/710] Linking static target lib/librte_cfgfile.a 00:01:43.819 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.819 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:43.819 [138/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:44.081 [139/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:44.081 [140/710] Linking static target lib/librte_metrics.a 00:01:44.081 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:44.081 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.081 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.081 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:44.081 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:44.346 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:44.346 [147/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:44.346 [148/710] Linking static target lib/librte_bitratestats.a 00:01:44.346 [149/710] Linking static target lib/librte_rcu.a 00:01:44.346 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:44.346 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:44.346 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:44.346 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.346 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 
00:01:44.606 [155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.606 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.606 [157/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.606 [158/710] Linking static target lib/librte_timer.a 00:01:44.606 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:44.606 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:44.606 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.606 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.606 [163/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.606 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.868 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.868 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:44.868 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:44.868 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:44.868 [169/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.868 [170/710] Linking static target lib/librte_bbdev.a 00:01:45.131 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.131 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:45.131 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.131 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.131 [175/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:45.131 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.131 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.131 [178/710] Linking static target lib/librte_compressdev.a 00:01:45.395 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:45.395 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:45.395 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:45.654 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:45.654 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:45.654 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:45.654 [185/710] Linking static target lib/librte_distributor.a 00:01:45.654 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:45.916 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.916 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.916 [189/710] Linking static target lib/librte_dmadev.a 00:01:45.916 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:45.916 [191/710] Linking static target lib/librte_bpf.a 00:01:45.916 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:45.916 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 
00:01:45.916 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.181 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:46.181 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:46.181 [197/710] Linking static target lib/librte_dispatcher.a 00:01:46.181 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.181 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:46.181 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:46.182 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:46.182 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:46.182 [203/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.182 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:46.182 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:46.182 [206/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:46.182 [207/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:46.441 [208/710] Linking static target lib/librte_gpudev.a 00:01:46.441 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.441 [210/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:46.441 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.441 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:46.441 [213/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:46.441 [214/710] Linking static target lib/librte_gro.a 00:01:46.441 [215/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.441 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.441 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:46.703 [218/710] Linking static target lib/librte_jobstats.a 00:01:46.703 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:46.703 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:46.703 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.703 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.703 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:46.967 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:46.967 [225/710] Linking static target lib/librte_latencystats.a 00:01:46.967 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:46.967 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.967 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:47.226 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:47.226 [230/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:47.226 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:47.226 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:47.226 [233/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:47.226 [234/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:47.226 [235/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.226 [236/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:47.226 [237/710] Linking static target lib/librte_ip_frag.a 00:01:47.489 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:47.489 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.489 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:47.489 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.489 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:47.753 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.753 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.753 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:47.753 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.753 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:48.014 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:48.014 [249/710] Linking static target lib/librte_gso.a 00:01:48.014 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:48.014 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.014 [252/710] Linking static target lib/librte_regexdev.a 00:01:48.014 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:48.014 [254/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:48.014 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:48.014 [256/710] Linking static target lib/librte_rawdev.a 00:01:48.014 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:48.014 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:48.277 [259/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:48.277 [260/710] Linking static target lib/librte_efd.a 00:01:48.277 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:48.277 [262/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:48.277 [263/710] Linking static target lib/librte_pcapng.a 00:01:48.277 [264/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.277 [265/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:48.277 [266/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:48.277 [267/710] Linking static target lib/librte_mldev.a 00:01:48.277 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:48.547 [269/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:48.547 [270/710] Linking static target lib/acl/libavx2_tmp.a 00:01:48.547 [271/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:48.547 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:48.547 [273/710] Linking static target lib/librte_lpm.a 00:01:48.547 [274/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:48.547 [275/710] Linking static 
target lib/librte_stack.a 00:01:48.547 [276/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.547 [277/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:48.547 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.547 [279/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.547 [280/710] Linking static target lib/librte_hash.a 00:01:48.547 [281/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.814 [282/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.814 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.814 [284/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.814 [285/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:48.814 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.814 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.814 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.074 [289/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.074 [290/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:49.074 [291/710] Linking static target lib/acl/libavx512_tmp.a 00:01:49.074 [292/710] Linking static target lib/librte_acl.a 00:01:49.074 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.074 [294/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.074 [295/710] Linking static target lib/librte_reorder.a 00:01:49.074 [296/710] Linking static target lib/librte_power.a 00:01:49.074 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.074 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.335 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.335 [300/710] Linking static target lib/librte_security.a 00:01:49.335 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:49.335 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:49.335 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.335 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.599 [305/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.599 [306/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.599 [307/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:49.599 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.599 [309/710] Linking static target lib/librte_rib.a 00:01:49.599 [310/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:49.599 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:49.599 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:49.599 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:49.599 [314/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:49.860 [315/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.860 [316/710] Linking 
static target lib/librte_mbuf.a 00:01:49.860 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:49.860 [318/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:49.860 [319/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:49.860 [320/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.860 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:49.860 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:49.860 [323/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:49.860 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:49.860 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:50.125 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.125 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.125 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.387 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.387 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:50.387 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:50.649 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:50.649 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:50.649 [334/710] Linking static target lib/librte_member.a 00:01:50.649 [335/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.913 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:50.913 [337/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:50.913 [338/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:50.913 [339/710] Linking static target lib/librte_eventdev.a 00:01:50.914 [340/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.914 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:50.914 [342/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.914 [343/710] Linking static target lib/librte_cryptodev.a 00:01:51.174 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:51.174 [345/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:51.174 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:51.174 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:51.174 [348/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.174 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:51.174 [350/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:51.174 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:51.174 [352/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.174 [353/710] Linking static target lib/librte_ethdev.a 00:01:51.174 [354/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:51.174 [355/710] Linking static target lib/librte_fib.a 00:01:51.174 [356/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:51.174 
[357/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:51.174 [358/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:51.174 [359/710] Linking static target lib/librte_sched.a 00:01:51.434 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:51.434 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:51.434 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:51.434 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:51.434 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:51.696 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:51.696 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:51.696 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:51.696 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.696 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:51.696 [370/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:51.981 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:51.981 [372/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.981 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:52.248 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:52.248 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:52.248 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:52.248 [377/710] Linking static target lib/librte_pdump.a 00:01:52.248 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:52.248 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:52.248 [380/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.509 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:52.509 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:52.509 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:52.509 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:52.509 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:52.509 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:52.509 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.509 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:52.509 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.773 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:52.773 [391/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:52.773 [392/710] Linking static target lib/librte_ipsec.a 00:01:52.773 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:52.773 [394/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:52.773 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:52.773 [396/710] Linking static target lib/librte_table.a 00:01:53.033 [397/710] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:53.033 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:53.033 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:53.033 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:53.300 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.300 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:53.561 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:53.561 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.826 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:53.826 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.826 [407/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:53.826 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.826 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.826 [410/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:53.826 [411/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:53.826 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.826 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.088 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:54.088 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.088 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:54.088 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.088 [418/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.088 [419/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.350 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.350 [421/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:54.350 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.350 [423/710] Linking target lib/librte_eal.so.24.0 00:01:54.350 [424/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.350 [425/710] Linking static target drivers/librte_bus_vdev.a 00:01:54.350 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:54.350 [427/710] Linking static target lib/librte_port.a 00:01:54.350 [428/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.611 [429/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:54.611 [430/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:54.611 [431/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.611 [432/710] Linking target lib/librte_ring.so.24.0 00:01:54.611 [433/710] Linking target lib/librte_meter.so.24.0 00:01:54.611 [434/710] Linking target lib/librte_pci.so.24.0 00:01:54.611 [435/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:54.876 [436/710] Linking target lib/librte_timer.so.24.0 00:01:54.876 [437/710] Generating drivers/rte_bus_vdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:54.876 [438/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:54.876 [439/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:54.876 [440/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:54.876 [441/710] Linking target lib/librte_acl.so.24.0 00:01:54.876 [442/710] Linking target lib/librte_rcu.so.24.0 00:01:54.876 [443/710] Linking target lib/librte_cfgfile.so.24.0 00:01:54.876 [444/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:54.876 [445/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:54.876 [446/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:54.876 [447/710] Linking target lib/librte_mempool.so.24.0 00:01:54.876 [448/710] Linking target lib/librte_rawdev.so.24.0 00:01:54.876 [449/710] Linking target lib/librte_dmadev.so.24.0 00:01:54.876 [450/710] Linking target lib/librte_jobstats.so.24.0 00:01:54.876 [451/710] Linking static target lib/librte_graph.a 00:01:55.139 [452/710] Linking target lib/librte_stack.so.24.0 00:01:55.139 [453/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:55.139 [454/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.139 [455/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:55.139 [456/710] Linking static target drivers/librte_bus_pci.a 00:01:55.139 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.139 [458/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:55.139 [459/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.139 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.139 [461/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:55.139 [462/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:55.139 [463/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:55.139 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:55.139 [465/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:55.139 [466/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.405 [467/710] Linking target lib/librte_rib.so.24.0 00:01:55.405 [468/710] Linking target lib/librte_mbuf.so.24.0 00:01:55.405 [469/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:55.405 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:55.405 [471/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:55.405 [472/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.405 [473/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.405 [474/710] Linking static target drivers/librte_mempool_ring.a 00:01:55.405 [475/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:55.405 [476/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:55.671 [477/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.672 [478/710] 
Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:55.672 [479/710] Linking target lib/librte_net.so.24.0 00:01:55.672 [480/710] Linking target lib/librte_bbdev.so.24.0 00:01:55.672 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:55.672 [482/710] Linking target lib/librte_compressdev.so.24.0 00:01:55.672 [483/710] Linking target lib/librte_cryptodev.so.24.0 00:01:55.672 [484/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:55.672 [485/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:55.672 [486/710] Linking target lib/librte_distributor.so.24.0 00:01:55.672 [487/710] Linking target lib/librte_gpudev.so.24.0 00:01:55.672 [488/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:55.672 [489/710] Linking target lib/librte_regexdev.so.24.0 00:01:55.672 [490/710] Linking target lib/librte_mldev.so.24.0 00:01:55.672 [491/710] Linking target lib/librte_reorder.so.24.0 00:01:55.672 [492/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:55.672 [493/710] Linking target lib/librte_sched.so.24.0 00:01:55.672 [494/710] Linking target lib/librte_fib.so.24.0 00:01:55.672 [495/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:55.934 [496/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:55.934 [497/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:55.934 [498/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.934 [499/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:55.934 [500/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:55.934 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:55.934 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:55.934 [503/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:55.934 [504/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:55.934 [505/710] Linking target lib/librte_cmdline.so.24.0 00:01:55.934 [506/710] Linking target lib/librte_hash.so.24.0 00:01:55.934 [507/710] Linking target lib/librte_security.so.24.0 00:01:55.934 [508/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:55.934 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:56.200 [510/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:56.200 [511/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.200 [512/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:56.200 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:56.200 [514/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:56.200 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:56.200 [516/710] Linking target lib/librte_efd.so.24.0 00:01:56.200 [517/710] Linking target lib/librte_lpm.so.24.0 00:01:56.467 [518/710] Linking target lib/librte_member.so.24.0 00:01:56.467 [519/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:56.467 [520/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:56.467 [521/710] Compiling C object 
app/dpdk-graph.p/graph_l3fwd.c.o 00:01:56.467 [522/710] Linking target lib/librte_ipsec.so.24.0 00:01:56.467 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:56.467 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:56.727 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:56.727 [526/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:56.727 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:56.988 [528/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:56.988 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:56.988 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:56.988 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:56.988 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:57.251 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:57.251 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:57.251 [535/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:57.251 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:57.251 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:57.251 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:57.251 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:57.513 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:57.513 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:57.777 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:57.777 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:57.777 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:57.777 [545/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:58.037 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:58.037 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:58.037 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:58.037 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:58.037 [550/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:58.037 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:58.037 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:58.037 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:58.303 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:58.303 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:58.303 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:58.303 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:58.564 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:58.564 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 
00:01:58.830 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:59.091 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:59.091 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:59.091 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:59.353 [564/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:59.353 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:59.353 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.353 [567/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:59.353 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:59.353 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:59.353 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:59.353 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:59.353 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:59.616 [573/710] Linking target lib/librte_ethdev.so.24.0 00:01:59.616 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:59.616 [575/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:59.616 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:59.878 [577/710] Linking target lib/librte_metrics.so.24.0 00:01:59.878 [578/710] Linking target lib/librte_bpf.so.24.0 00:01:59.878 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:59.878 [580/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:59.878 [581/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:59.878 [582/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:59.878 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:59.878 [584/710] Linking target lib/librte_eventdev.so.24.0 00:01:59.878 [585/710] Linking target lib/librte_gro.so.24.0 00:01:59.879 [586/710] Linking target lib/librte_gso.so.24.0 00:01:59.879 [587/710] Linking target lib/librte_ip_frag.so.24.0 00:02:00.142 [588/710] Linking target lib/librte_pcapng.so.24.0 00:02:00.142 [589/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:00.142 [590/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:00.142 [591/710] Linking target lib/librte_power.so.24.0 00:02:00.142 [592/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:00.142 [593/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:00.142 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:02:00.142 [595/710] Linking target lib/librte_latencystats.so.24.0 00:02:00.142 [596/710] Linking static target lib/librte_pdcp.a 00:02:00.142 [597/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:00.142 [598/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:00.142 [599/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:00.142 [600/710] Generating symbol file 
lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:00.407 [601/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:00.407 [602/710] Linking target lib/librte_dispatcher.so.24.0 00:02:00.407 [603/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:00.407 [604/710] Linking target lib/librte_pdump.so.24.0 00:02:00.407 [605/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:00.407 [606/710] Linking target lib/librte_port.so.24.0 00:02:00.407 [607/710] Linking target lib/librte_graph.so.24.0 00:02:00.407 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:00.407 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:00.407 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:00.669 [611/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:00.669 [612/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:00.669 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:00.669 [614/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.669 [615/710] Linking target lib/librte_table.so.24.0 00:02:00.669 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:00.669 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:00.669 [618/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:00.669 [619/710] Linking target lib/librte_pdcp.so.24.0 00:02:00.932 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:00.932 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:00.932 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:00.932 [623/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:00.932 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:00.932 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:00.932 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:01.193 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:01.193 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:01.452 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:01.452 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:01.711 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:01.711 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:01.970 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:01.970 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:01.970 [635/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:01.970 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:01.970 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:01.970 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:01.970 [639/710] Compiling C 
object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:01.970 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:02.229 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:02.229 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:02.230 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:02.488 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:02.489 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:02.489 [646/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:02.489 [647/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:02.489 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:02.489 [649/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:02.748 [650/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:02.748 [651/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:02.748 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:02.748 [653/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:02.748 [654/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:02.748 [655/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:03.006 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:03.264 [657/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:03.264 [658/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.264 [659/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.264 [660/710] Linking static target drivers/librte_net_i40e.a 00:02:03.264 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:03.264 [662/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:03.264 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:03.522 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:03.781 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:03.781 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:03.781 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.781 [668/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:03.781 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:04.039 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:04.298 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:04.556 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:04.556 [673/710] Linking static target lib/librte_node.a 00:02:04.556 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:04.814 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.814 [676/710] Linking target lib/librte_node.so.24.0 00:02:06.194 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:06.452 
[678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:06.710 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:08.082 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:08.342 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:15.064 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.136 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.136 [684/710] Linking static target lib/librte_vhost.a 00:02:47.136 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.136 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:57.114 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:57.114 [688/710] Linking static target lib/librte_pipeline.a 00:02:57.114 [689/710] Linking target app/dpdk-test-cmdline 00:02:57.114 [690/710] Linking target app/dpdk-proc-info 00:02:57.114 [691/710] Linking target app/dpdk-dumpcap 00:02:57.114 [692/710] Linking target app/dpdk-test-acl 00:02:57.114 [693/710] Linking target app/dpdk-test-flow-perf 00:02:57.114 [694/710] Linking target app/dpdk-test-fib 00:02:57.114 [695/710] Linking target app/dpdk-pdump 00:02:57.114 [696/710] Linking target app/dpdk-test-sad 00:02:57.114 [697/710] Linking target app/dpdk-graph 00:02:57.114 [698/710] Linking target app/dpdk-test-gpudev 00:02:57.114 [699/710] Linking target app/dpdk-test-regex 00:02:57.114 [700/710] Linking target app/dpdk-test-compress-perf 00:02:57.114 [701/710] Linking target app/dpdk-test-crypto-perf 00:02:57.114 [702/710] Linking target app/dpdk-test-dma-perf 00:02:57.114 [703/710] Linking target app/dpdk-test-pipeline 00:02:57.114 [704/710] Linking target app/dpdk-test-security-perf 00:02:57.114 [705/710] Linking target app/dpdk-test-mldev 00:02:57.114 [706/710] Linking target app/dpdk-test-bbdev 00:02:57.114 [707/710] Linking target app/dpdk-test-eventdev 00:02:57.114 [708/710] Linking target app/dpdk-testpmd 00:02:59.058 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.058 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:59.058 03:02:25 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:59.058 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:59.058 [0/1] Installing files. 
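For reference, the configure/build/install sequence recorded above can be approximated with the commands below — a minimal sketch, assuming the workspace layout shown in this log. The exact meson invocation is not captured in the output, so the -D flags are reconstructed from the "User defined options" summary and are assumptions; the ninja commands, directories, and -j48 come from the log itself, using the non-deprecated `meson setup` form that the earlier warning recommends.
# Sketch only: option flags below are inferred from the "User defined options"
# summary printed above; they are not the literal command the CI job ran.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
# Build and install, as invoked by autobuild_common.sh in the log above.
ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install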
00:02:59.325 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.328 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.590 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:59.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:59.593 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.593 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.594 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.164 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.165 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.165 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.165 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.165 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.165 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:00.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:00.169 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:00.169 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:00.169 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:00.169 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:00.169 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:00.169 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:00.169 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:00.169 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:00.169 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:00.169 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:00.169 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:00.169 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:00.169 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:00.169 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:00.169 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:00.170 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:00.170 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:00.170 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:00.170 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:00.170 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:00.170 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:00.170 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:00.170 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:00.170 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:00.170 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:00.170 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:00.170 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:00.170 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:00.170 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:00.170 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:00.170 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:00.170 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:00.170 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:00.170 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:00.170 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:00.170 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:00.170 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:00.170 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:00.170 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:00.170 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:00.170 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:00.170 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:00.170 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:00.170 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:00.170 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:00.170 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:00.170 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:00.170 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:00.170 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:00.170 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:00.170 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:00.170 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:00.170 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:00.170 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:00.170 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:00.170 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:00.170 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:00.170 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:00.170 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:00.170 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:00.170 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:00.170 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:00.170 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:00.170 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:00.170 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:00.170 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:00.170 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:00.170 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:00.170 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:00.170 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:00.170 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:00.170 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:00.170 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:00.170 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:00.170 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:00.170 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:00.170 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:00.170 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:00.170 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:00.170 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:00.170 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:00.170 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:00.170 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:00.170 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:00.170 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:00.170 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:00.170 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:00.170 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:00.170 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:00.170 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:00.170 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:00.170 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:00.170 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:00.170 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:00.170 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:00.170 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:00.170 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:00.170 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:00.170 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:00.170 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:00.170 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:00.170 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:00.170 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:00.170 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:00.170 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:00.170 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:00.170 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:00.170 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:00.171 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:00.171 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:00.171 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:00.171 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:00.171 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:00.171 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:00.171 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:00.171 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:00.171 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:00.171 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:00.171 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:00.171 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:00.171 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:00.171 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:00.171 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:00.171 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:00.171 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:00.171 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:00.171 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:00.171 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:00.171 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:00.171 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:00.171 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:00.171 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:00.171 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:00.171 03:02:26 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:00.171 03:02:26 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.171 03:02:26 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:00.171 03:02:26 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.171 00:03:00.171 real 1m25.575s 00:03:00.171 user 17m57.045s 00:03:00.171 sys 2m6.159s 00:03:00.171 03:02:26 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:00.171 03:02:26 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:00.171 ************************************ 00:03:00.171 END TEST build_native_dpdk 00:03:00.171 ************************************ 00:03:00.171 03:02:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.171 03:02:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.171 03:02:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:00.429 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:00.429 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.430 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.430 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:00.687 Using 'verbs' RDMA provider 00:03:11.224 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:21.197 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:21.197 Creating mk/config.mk...done. 00:03:21.197 Creating mk/cc.flags.mk...done. 00:03:21.197 Type 'make' to build. 00:03:21.197 03:02:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:21.197 03:02:46 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:21.197 03:02:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:21.197 03:02:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.197 ************************************ 00:03:21.197 START TEST make 00:03:21.197 ************************************ 00:03:21.197 03:02:46 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:21.197 make[1]: Nothing to be done for 'all'. 
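The "Installing symlink pointing to ..." entries above lay down the standard shared-library name chain for every DPDK library, and the './librte_*.so*' -> 'dpdk/pmds-24.0/...' moves place the PMDs in a driver subdirectory that the symlink-drivers-solibs.sh step just run keeps resolvable from the library path. A minimal sketch of that chain for one library (librte_eal chosen purely as an example; paths mirror this workspace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
  # real file: librte_eal.so.24.0; SONAME link used at run time:
  ln -sf librte_eal.so.24.0 librte_eal.so.24
  # linker name used at build time via -lrte_eal:
  ln -sf librte_eal.so.24 librte_eal.so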
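As the configure output above notes, the staged DPDK build is picked up through the pkg-config files installed into dpdk/build/lib/pkgconfig. An illustrative way to query the same flags by hand (demo.c and demo.o are hypothetical names, not part of this job):

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # reports the DPDK version of this staged build
  pkg-config --cflags libdpdk       # include path under dpdk/build/include
  pkg-config --libs libdpdk         # -L dpdk/build/lib plus the librte_* libraries
  cc -c demo.c $(pkg-config --cflags libdpdk) -o demo.o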
00:03:21.768 The Meson build system 00:03:21.768 Version: 1.3.1 00:03:21.768 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:21.768 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:21.768 Build type: native build 00:03:21.768 Project name: libvfio-user 00:03:21.768 Project version: 0.0.1 00:03:21.768 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:21.768 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:21.768 Host machine cpu family: x86_64 00:03:21.768 Host machine cpu: x86_64 00:03:21.768 Run-time dependency threads found: YES 00:03:21.768 Library dl found: YES 00:03:21.768 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:21.768 Run-time dependency json-c found: YES 0.17 00:03:21.768 Run-time dependency cmocka found: YES 1.1.7 00:03:21.768 Program pytest-3 found: NO 00:03:21.768 Program flake8 found: NO 00:03:21.768 Program misspell-fixer found: NO 00:03:21.768 Program restructuredtext-lint found: NO 00:03:21.768 Program valgrind found: YES (/usr/bin/valgrind) 00:03:21.768 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:21.768 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:21.768 Compiler for C supports arguments -Wwrite-strings: YES 00:03:21.768 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:21.768 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:21.768 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:21.768 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:21.768 Build targets in project: 8 00:03:21.768 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:21.768 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:21.768 00:03:21.768 libvfio-user 0.0.1 00:03:21.768 00:03:21.768 User defined options 00:03:21.768 buildtype : debug 00:03:21.768 default_library: shared 00:03:21.768 libdir : /usr/local/lib 00:03:21.768 00:03:21.768 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.715 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:22.715 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:22.715 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:22.715 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:22.715 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:22.715 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:22.715 [6/37] Compiling C object samples/null.p/null.c.o 00:03:22.715 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:22.715 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:22.975 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:22.975 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:22.975 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:22.975 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:22.975 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:22.975 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:22.975 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:22.975 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:22.975 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:22.975 [18/37] Compiling C object samples/server.p/server.c.o 00:03:22.975 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:22.975 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:22.975 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:22.975 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:22.975 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:22.975 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:22.975 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:22.975 [26/37] Compiling C object samples/client.p/client.c.o 00:03:22.975 [27/37] Linking target samples/client 00:03:23.237 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:23.237 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:23.237 [30/37] Linking target test/unit_tests 00:03:23.237 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:23.502 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:23.502 [33/37] Linking target samples/server 00:03:23.502 [34/37] Linking target samples/gpio-pci-idio-16 00:03:23.502 [35/37] Linking target samples/lspci 00:03:23.502 [36/37] Linking target samples/null 00:03:23.502 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:23.502 INFO: autodetecting backend as ninja 00:03:23.502 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
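The configure summary above (buildtype debug, default_library shared, libdir /usr/local/lib) and the install step that follows are the usual Meson flow; a stand-alone sketch of the same sequence, with the source and build directories from this log written out and DESTDIR redirecting the install into the SPDK tree instead of the real /usr/local:

  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
      --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug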
00:03:23.761 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:24.335 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:24.335 ninja: no work to do. 00:03:36.584 CC lib/log/log.o 00:03:36.584 CC lib/log/log_flags.o 00:03:36.584 CC lib/log/log_deprecated.o 00:03:36.584 CC lib/ut/ut.o 00:03:36.584 CC lib/ut_mock/mock.o 00:03:36.584 LIB libspdk_log.a 00:03:36.584 LIB libspdk_ut.a 00:03:36.584 LIB libspdk_ut_mock.a 00:03:36.584 SO libspdk_log.so.7.0 00:03:36.584 SO libspdk_ut_mock.so.6.0 00:03:36.584 SO libspdk_ut.so.2.0 00:03:36.584 SYMLINK libspdk_ut_mock.so 00:03:36.584 SYMLINK libspdk_ut.so 00:03:36.584 SYMLINK libspdk_log.so 00:03:36.584 CC lib/dma/dma.o 00:03:36.584 CC lib/ioat/ioat.o 00:03:36.584 CXX lib/trace_parser/trace.o 00:03:36.584 CC lib/util/base64.o 00:03:36.584 CC lib/util/bit_array.o 00:03:36.584 CC lib/util/cpuset.o 00:03:36.584 CC lib/util/crc16.o 00:03:36.584 CC lib/util/crc32.o 00:03:36.584 CC lib/util/crc32c.o 00:03:36.584 CC lib/util/crc32_ieee.o 00:03:36.584 CC lib/util/crc64.o 00:03:36.584 CC lib/util/dif.o 00:03:36.584 CC lib/util/fd.o 00:03:36.584 CC lib/util/file.o 00:03:36.584 CC lib/util/hexlify.o 00:03:36.584 CC lib/util/iov.o 00:03:36.584 CC lib/util/math.o 00:03:36.584 CC lib/util/pipe.o 00:03:36.584 CC lib/util/strerror_tls.o 00:03:36.584 CC lib/util/string.o 00:03:36.584 CC lib/util/uuid.o 00:03:36.584 CC lib/util/fd_group.o 00:03:36.584 CC lib/util/xor.o 00:03:36.584 CC lib/util/zipf.o 00:03:36.584 CC lib/vfio_user/host/vfio_user_pci.o 00:03:36.584 CC lib/vfio_user/host/vfio_user.o 00:03:36.584 LIB libspdk_dma.a 00:03:36.584 SO libspdk_dma.so.4.0 00:03:36.584 LIB libspdk_ioat.a 00:03:36.584 SO libspdk_ioat.so.7.0 00:03:36.584 SYMLINK libspdk_dma.so 00:03:36.584 LIB libspdk_vfio_user.a 00:03:36.584 SYMLINK libspdk_ioat.so 00:03:36.584 SO libspdk_vfio_user.so.5.0 00:03:36.584 SYMLINK libspdk_vfio_user.so 00:03:36.584 LIB libspdk_util.a 00:03:36.842 SO libspdk_util.so.9.0 00:03:36.842 SYMLINK libspdk_util.so 00:03:37.099 CC lib/idxd/idxd.o 00:03:37.099 CC lib/vmd/vmd.o 00:03:37.099 CC lib/json/json_parse.o 00:03:37.099 CC lib/idxd/idxd_user.o 00:03:37.099 CC lib/json/json_util.o 00:03:37.099 CC lib/vmd/led.o 00:03:37.099 CC lib/rdma/common.o 00:03:37.099 CC lib/env_dpdk/env.o 00:03:37.099 CC lib/idxd/idxd_kernel.o 00:03:37.099 CC lib/conf/conf.o 00:03:37.099 CC lib/json/json_write.o 00:03:37.099 CC lib/rdma/rdma_verbs.o 00:03:37.099 CC lib/env_dpdk/memory.o 00:03:37.099 CC lib/env_dpdk/pci.o 00:03:37.099 CC lib/env_dpdk/init.o 00:03:37.099 CC lib/env_dpdk/threads.o 00:03:37.099 CC lib/env_dpdk/pci_ioat.o 00:03:37.099 CC lib/env_dpdk/pci_virtio.o 00:03:37.099 CC lib/env_dpdk/pci_vmd.o 00:03:37.099 CC lib/env_dpdk/pci_idxd.o 00:03:37.099 CC lib/env_dpdk/pci_event.o 00:03:37.099 CC lib/env_dpdk/sigbus_handler.o 00:03:37.099 CC lib/env_dpdk/pci_dpdk.o 00:03:37.099 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.099 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.099 LIB libspdk_trace_parser.a 00:03:37.099 SO libspdk_trace_parser.so.5.0 00:03:37.356 SYMLINK libspdk_trace_parser.so 00:03:37.356 LIB libspdk_conf.a 00:03:37.356 SO libspdk_conf.so.6.0 00:03:37.356 LIB libspdk_rdma.a 00:03:37.356 LIB libspdk_json.a 00:03:37.356 SYMLINK libspdk_conf.so 00:03:37.356 SO libspdk_rdma.so.6.0 00:03:37.356 SO libspdk_json.so.6.0 00:03:37.614 SYMLINK libspdk_rdma.so 00:03:37.614 SYMLINK 
libspdk_json.so 00:03:37.614 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.614 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.614 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.614 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:37.614 LIB libspdk_idxd.a 00:03:37.614 SO libspdk_idxd.so.12.0 00:03:37.872 LIB libspdk_vmd.a 00:03:37.872 SO libspdk_vmd.so.6.0 00:03:37.872 SYMLINK libspdk_idxd.so 00:03:37.872 SYMLINK libspdk_vmd.so 00:03:37.872 LIB libspdk_jsonrpc.a 00:03:37.872 SO libspdk_jsonrpc.so.6.0 00:03:38.130 SYMLINK libspdk_jsonrpc.so 00:03:38.130 CC lib/rpc/rpc.o 00:03:38.388 LIB libspdk_rpc.a 00:03:38.388 SO libspdk_rpc.so.6.0 00:03:38.388 SYMLINK libspdk_rpc.so 00:03:38.647 CC lib/notify/notify.o 00:03:38.647 CC lib/notify/notify_rpc.o 00:03:38.647 CC lib/trace/trace.o 00:03:38.647 CC lib/trace/trace_flags.o 00:03:38.647 CC lib/trace/trace_rpc.o 00:03:38.647 CC lib/keyring/keyring.o 00:03:38.647 CC lib/keyring/keyring_rpc.o 00:03:38.906 LIB libspdk_notify.a 00:03:38.906 SO libspdk_notify.so.6.0 00:03:38.906 LIB libspdk_keyring.a 00:03:38.906 SYMLINK libspdk_notify.so 00:03:38.906 LIB libspdk_trace.a 00:03:38.906 SO libspdk_keyring.so.1.0 00:03:38.906 SO libspdk_trace.so.10.0 00:03:38.906 SYMLINK libspdk_keyring.so 00:03:38.906 SYMLINK libspdk_trace.so 00:03:39.164 LIB libspdk_env_dpdk.a 00:03:39.164 SO libspdk_env_dpdk.so.14.0 00:03:39.164 CC lib/sock/sock.o 00:03:39.164 CC lib/sock/sock_rpc.o 00:03:39.164 CC lib/thread/thread.o 00:03:39.164 CC lib/thread/iobuf.o 00:03:39.422 SYMLINK libspdk_env_dpdk.so 00:03:39.680 LIB libspdk_sock.a 00:03:39.680 SO libspdk_sock.so.9.0 00:03:39.680 SYMLINK libspdk_sock.so 00:03:39.939 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:39.939 CC lib/nvme/nvme_ctrlr.o 00:03:39.939 CC lib/nvme/nvme_fabric.o 00:03:39.939 CC lib/nvme/nvme_ns_cmd.o 00:03:39.939 CC lib/nvme/nvme_ns.o 00:03:39.939 CC lib/nvme/nvme_pcie_common.o 00:03:39.939 CC lib/nvme/nvme_pcie.o 00:03:39.939 CC lib/nvme/nvme_qpair.o 00:03:39.939 CC lib/nvme/nvme.o 00:03:39.939 CC lib/nvme/nvme_quirks.o 00:03:39.939 CC lib/nvme/nvme_transport.o 00:03:39.939 CC lib/nvme/nvme_discovery.o 00:03:39.939 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:39.939 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:39.939 CC lib/nvme/nvme_tcp.o 00:03:39.939 CC lib/nvme/nvme_opal.o 00:03:39.939 CC lib/nvme/nvme_io_msg.o 00:03:39.939 CC lib/nvme/nvme_poll_group.o 00:03:39.939 CC lib/nvme/nvme_zns.o 00:03:39.939 CC lib/nvme/nvme_stubs.o 00:03:39.939 CC lib/nvme/nvme_auth.o 00:03:39.939 CC lib/nvme/nvme_cuse.o 00:03:39.939 CC lib/nvme/nvme_vfio_user.o 00:03:39.939 CC lib/nvme/nvme_rdma.o 00:03:40.872 LIB libspdk_thread.a 00:03:40.872 SO libspdk_thread.so.10.0 00:03:40.872 SYMLINK libspdk_thread.so 00:03:41.130 CC lib/vfu_tgt/tgt_endpoint.o 00:03:41.130 CC lib/blob/blobstore.o 00:03:41.130 CC lib/init/json_config.o 00:03:41.130 CC lib/virtio/virtio.o 00:03:41.130 CC lib/accel/accel.o 00:03:41.130 CC lib/vfu_tgt/tgt_rpc.o 00:03:41.130 CC lib/init/subsystem.o 00:03:41.130 CC lib/blob/request.o 00:03:41.130 CC lib/virtio/virtio_vhost_user.o 00:03:41.130 CC lib/accel/accel_rpc.o 00:03:41.131 CC lib/init/subsystem_rpc.o 00:03:41.131 CC lib/blob/zeroes.o 00:03:41.131 CC lib/accel/accel_sw.o 00:03:41.131 CC lib/virtio/virtio_vfio_user.o 00:03:41.131 CC lib/init/rpc.o 00:03:41.131 CC lib/blob/blob_bs_dev.o 00:03:41.131 CC lib/virtio/virtio_pci.o 00:03:41.388 LIB libspdk_init.a 00:03:41.388 SO libspdk_init.so.5.0 00:03:41.388 LIB libspdk_vfu_tgt.a 00:03:41.388 LIB libspdk_virtio.a 00:03:41.388 SYMLINK libspdk_init.so 00:03:41.388 SO libspdk_vfu_tgt.so.3.0 00:03:41.388 
SO libspdk_virtio.so.7.0 00:03:41.388 SYMLINK libspdk_vfu_tgt.so 00:03:41.388 SYMLINK libspdk_virtio.so 00:03:41.646 CC lib/event/app.o 00:03:41.646 CC lib/event/reactor.o 00:03:41.646 CC lib/event/log_rpc.o 00:03:41.646 CC lib/event/app_rpc.o 00:03:41.646 CC lib/event/scheduler_static.o 00:03:41.904 LIB libspdk_event.a 00:03:41.904 SO libspdk_event.so.13.0 00:03:42.162 SYMLINK libspdk_event.so 00:03:42.162 LIB libspdk_accel.a 00:03:42.162 SO libspdk_accel.so.15.0 00:03:42.162 LIB libspdk_nvme.a 00:03:42.162 SYMLINK libspdk_accel.so 00:03:42.420 SO libspdk_nvme.so.13.0 00:03:42.420 CC lib/bdev/bdev.o 00:03:42.420 CC lib/bdev/bdev_rpc.o 00:03:42.420 CC lib/bdev/bdev_zone.o 00:03:42.420 CC lib/bdev/part.o 00:03:42.420 CC lib/bdev/scsi_nvme.o 00:03:42.678 SYMLINK libspdk_nvme.so 00:03:44.052 LIB libspdk_blob.a 00:03:44.052 SO libspdk_blob.so.11.0 00:03:44.310 SYMLINK libspdk_blob.so 00:03:44.310 CC lib/lvol/lvol.o 00:03:44.310 CC lib/blobfs/blobfs.o 00:03:44.310 CC lib/blobfs/tree.o 00:03:45.244 LIB libspdk_bdev.a 00:03:45.244 LIB libspdk_blobfs.a 00:03:45.244 SO libspdk_bdev.so.15.0 00:03:45.244 SO libspdk_blobfs.so.10.0 00:03:45.244 LIB libspdk_lvol.a 00:03:45.244 SYMLINK libspdk_blobfs.so 00:03:45.244 SYMLINK libspdk_bdev.so 00:03:45.244 SO libspdk_lvol.so.10.0 00:03:45.244 SYMLINK libspdk_lvol.so 00:03:45.511 CC lib/ftl/ftl_core.o 00:03:45.511 CC lib/nvmf/ctrlr.o 00:03:45.511 CC lib/ublk/ublk.o 00:03:45.511 CC lib/nbd/nbd.o 00:03:45.511 CC lib/ublk/ublk_rpc.o 00:03:45.511 CC lib/scsi/dev.o 00:03:45.511 CC lib/nbd/nbd_rpc.o 00:03:45.511 CC lib/nvmf/ctrlr_discovery.o 00:03:45.511 CC lib/ftl/ftl_init.o 00:03:45.511 CC lib/scsi/lun.o 00:03:45.511 CC lib/nvmf/ctrlr_bdev.o 00:03:45.511 CC lib/ftl/ftl_layout.o 00:03:45.511 CC lib/scsi/port.o 00:03:45.511 CC lib/nvmf/subsystem.o 00:03:45.511 CC lib/scsi/scsi.o 00:03:45.511 CC lib/nvmf/nvmf.o 00:03:45.511 CC lib/ftl/ftl_debug.o 00:03:45.511 CC lib/ftl/ftl_io.o 00:03:45.511 CC lib/scsi/scsi_bdev.o 00:03:45.511 CC lib/nvmf/transport.o 00:03:45.511 CC lib/nvmf/nvmf_rpc.o 00:03:45.511 CC lib/ftl/ftl_sb.o 00:03:45.511 CC lib/scsi/scsi_pr.o 00:03:45.511 CC lib/nvmf/tcp.o 00:03:45.511 CC lib/ftl/ftl_l2p.o 00:03:45.511 CC lib/scsi/scsi_rpc.o 00:03:45.511 CC lib/nvmf/stubs.o 00:03:45.511 CC lib/nvmf/mdns_server.o 00:03:45.511 CC lib/ftl/ftl_l2p_flat.o 00:03:45.511 CC lib/ftl/ftl_nv_cache.o 00:03:45.511 CC lib/nvmf/vfio_user.o 00:03:45.511 CC lib/scsi/task.o 00:03:45.511 CC lib/ftl/ftl_band.o 00:03:45.511 CC lib/nvmf/rdma.o 00:03:45.511 CC lib/ftl/ftl_band_ops.o 00:03:45.511 CC lib/nvmf/auth.o 00:03:45.511 CC lib/ftl/ftl_writer.o 00:03:45.511 CC lib/ftl/ftl_rq.o 00:03:45.511 CC lib/ftl/ftl_reloc.o 00:03:45.511 CC lib/ftl/ftl_l2p_cache.o 00:03:45.511 CC lib/ftl/ftl_p2l.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:45.511 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:45.773 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:45.773 CC lib/ftl/utils/ftl_conf.o 00:03:46.036 CC lib/ftl/utils/ftl_md.o 00:03:46.036 CC lib/ftl/utils/ftl_mempool.o 00:03:46.036 CC lib/ftl/utils/ftl_bitmap.o 00:03:46.036 CC 
lib/ftl/utils/ftl_property.o 00:03:46.036 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:46.036 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.036 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.036 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.036 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:46.036 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:46.037 CC lib/ftl/base/ftl_base_dev.o 00:03:46.037 CC lib/ftl/base/ftl_base_bdev.o 00:03:46.299 CC lib/ftl/ftl_trace.o 00:03:46.299 LIB libspdk_nbd.a 00:03:46.299 SO libspdk_nbd.so.7.0 00:03:46.299 LIB libspdk_scsi.a 00:03:46.299 SYMLINK libspdk_nbd.so 00:03:46.557 SO libspdk_scsi.so.9.0 00:03:46.557 SYMLINK libspdk_scsi.so 00:03:46.557 LIB libspdk_ublk.a 00:03:46.557 SO libspdk_ublk.so.3.0 00:03:46.557 SYMLINK libspdk_ublk.so 00:03:46.816 CC lib/vhost/vhost.o 00:03:46.816 CC lib/iscsi/conn.o 00:03:46.816 CC lib/iscsi/init_grp.o 00:03:46.816 CC lib/vhost/vhost_rpc.o 00:03:46.816 CC lib/vhost/vhost_scsi.o 00:03:46.816 CC lib/iscsi/iscsi.o 00:03:46.816 CC lib/vhost/vhost_blk.o 00:03:46.816 CC lib/iscsi/md5.o 00:03:46.816 CC lib/vhost/rte_vhost_user.o 00:03:46.816 CC lib/iscsi/param.o 00:03:46.816 CC lib/iscsi/portal_grp.o 00:03:46.816 CC lib/iscsi/tgt_node.o 00:03:46.816 CC lib/iscsi/iscsi_subsystem.o 00:03:46.816 CC lib/iscsi/iscsi_rpc.o 00:03:46.816 CC lib/iscsi/task.o 00:03:46.816 LIB libspdk_ftl.a 00:03:47.073 SO libspdk_ftl.so.9.0 00:03:47.331 SYMLINK libspdk_ftl.so 00:03:47.897 LIB libspdk_vhost.a 00:03:47.897 SO libspdk_vhost.so.8.0 00:03:48.156 SYMLINK libspdk_vhost.so 00:03:48.156 LIB libspdk_nvmf.a 00:03:48.156 LIB libspdk_iscsi.a 00:03:48.156 SO libspdk_nvmf.so.18.0 00:03:48.156 SO libspdk_iscsi.so.8.0 00:03:48.414 SYMLINK libspdk_iscsi.so 00:03:48.414 SYMLINK libspdk_nvmf.so 00:03:48.673 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.673 CC module/vfu_device/vfu_virtio.o 00:03:48.673 CC module/vfu_device/vfu_virtio_blk.o 00:03:48.673 CC module/vfu_device/vfu_virtio_scsi.o 00:03:48.673 CC module/vfu_device/vfu_virtio_rpc.o 00:03:48.673 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.673 CC module/accel/ioat/accel_ioat.o 00:03:48.673 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.673 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.673 CC module/accel/error/accel_error.o 00:03:48.673 CC module/accel/error/accel_error_rpc.o 00:03:48.673 CC module/keyring/file/keyring.o 00:03:48.673 CC module/blob/bdev/blob_bdev.o 00:03:48.673 CC module/keyring/linux/keyring.o 00:03:48.673 CC module/keyring/linux/keyring_rpc.o 00:03:48.673 CC module/keyring/file/keyring_rpc.o 00:03:48.673 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.673 CC module/accel/dsa/accel_dsa.o 00:03:48.673 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.673 CC module/accel/iaa/accel_iaa.o 00:03:48.673 CC module/sock/posix/posix.o 00:03:48.673 CC module/accel/iaa/accel_iaa_rpc.o 00:03:48.673 LIB libspdk_env_dpdk_rpc.a 00:03:48.673 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.932 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.932 LIB libspdk_keyring_file.a 00:03:48.932 LIB libspdk_keyring_linux.a 00:03:48.932 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.932 LIB libspdk_scheduler_gscheduler.a 00:03:48.932 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.932 SO libspdk_keyring_linux.so.1.0 00:03:48.932 SO libspdk_keyring_file.so.1.0 00:03:48.932 SO 
libspdk_scheduler_gscheduler.so.4.0 00:03:48.932 LIB libspdk_accel_error.a 00:03:48.932 LIB libspdk_accel_ioat.a 00:03:48.932 LIB libspdk_scheduler_dynamic.a 00:03:48.932 LIB libspdk_accel_iaa.a 00:03:48.932 SO libspdk_accel_error.so.2.0 00:03:48.932 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.932 SO libspdk_accel_ioat.so.6.0 00:03:48.932 SO libspdk_scheduler_dynamic.so.4.0 00:03:48.932 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.932 SYMLINK libspdk_keyring_linux.so 00:03:48.932 SYMLINK libspdk_keyring_file.so 00:03:48.932 SO libspdk_accel_iaa.so.3.0 00:03:48.932 LIB libspdk_accel_dsa.a 00:03:48.932 SYMLINK libspdk_scheduler_dynamic.so 00:03:48.932 SYMLINK libspdk_accel_error.so 00:03:48.932 SYMLINK libspdk_accel_ioat.so 00:03:48.932 LIB libspdk_blob_bdev.a 00:03:48.932 SO libspdk_accel_dsa.so.5.0 00:03:48.932 SYMLINK libspdk_accel_iaa.so 00:03:48.932 SO libspdk_blob_bdev.so.11.0 00:03:49.190 SYMLINK libspdk_accel_dsa.so 00:03:49.190 SYMLINK libspdk_blob_bdev.so 00:03:49.190 LIB libspdk_vfu_device.a 00:03:49.190 SO libspdk_vfu_device.so.3.0 00:03:49.449 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.449 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.449 CC module/bdev/error/vbdev_error.o 00:03:49.449 CC module/bdev/gpt/gpt.o 00:03:49.449 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.449 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.449 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.449 CC module/bdev/delay/vbdev_delay.o 00:03:49.449 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.449 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.449 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.449 CC module/bdev/null/bdev_null.o 00:03:49.449 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:49.449 CC module/bdev/null/bdev_null_rpc.o 00:03:49.449 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:49.449 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:49.449 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:49.449 CC module/bdev/nvme/bdev_nvme.o 00:03:49.449 CC module/bdev/ftl/bdev_ftl.o 00:03:49.449 CC module/bdev/split/vbdev_split.o 00:03:49.449 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:49.449 CC module/bdev/aio/bdev_aio.o 00:03:49.449 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:49.449 CC module/bdev/malloc/bdev_malloc.o 00:03:49.449 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.449 CC module/bdev/raid/bdev_raid.o 00:03:49.449 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.449 CC module/bdev/nvme/nvme_rpc.o 00:03:49.449 CC module/bdev/aio/bdev_aio_rpc.o 00:03:49.449 CC module/bdev/raid/bdev_raid_rpc.o 00:03:49.449 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:49.449 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.449 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.449 CC module/bdev/nvme/vbdev_opal.o 00:03:49.449 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:49.449 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.449 CC module/bdev/raid/raid0.o 00:03:49.449 CC module/bdev/raid/raid1.o 00:03:49.449 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:49.449 CC module/bdev/iscsi/bdev_iscsi.o 00:03:49.449 CC module/bdev/raid/concat.o 00:03:49.449 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:49.449 SYMLINK libspdk_vfu_device.so 00:03:49.709 LIB libspdk_sock_posix.a 00:03:49.709 SO libspdk_sock_posix.so.6.0 00:03:49.709 SYMLINK libspdk_sock_posix.so 00:03:49.709 LIB libspdk_blobfs_bdev.a 00:03:49.709 SO libspdk_blobfs_bdev.so.6.0 00:03:49.709 LIB libspdk_bdev_null.a 00:03:49.709 LIB libspdk_bdev_split.a 00:03:49.709 SO libspdk_bdev_null.so.6.0 00:03:49.709 SYMLINK libspdk_blobfs_bdev.so 00:03:50.003 LIB 
libspdk_bdev_aio.a 00:03:50.003 SO libspdk_bdev_split.so.6.0 00:03:50.003 SO libspdk_bdev_aio.so.6.0 00:03:50.003 LIB libspdk_bdev_error.a 00:03:50.003 LIB libspdk_bdev_ftl.a 00:03:50.003 LIB libspdk_bdev_gpt.a 00:03:50.003 SYMLINK libspdk_bdev_null.so 00:03:50.003 LIB libspdk_bdev_passthru.a 00:03:50.003 SO libspdk_bdev_error.so.6.0 00:03:50.003 SYMLINK libspdk_bdev_split.so 00:03:50.003 SO libspdk_bdev_gpt.so.6.0 00:03:50.003 SO libspdk_bdev_ftl.so.6.0 00:03:50.003 SO libspdk_bdev_passthru.so.6.0 00:03:50.003 SYMLINK libspdk_bdev_aio.so 00:03:50.003 SYMLINK libspdk_bdev_error.so 00:03:50.003 LIB libspdk_bdev_delay.a 00:03:50.003 SYMLINK libspdk_bdev_gpt.so 00:03:50.003 SYMLINK libspdk_bdev_ftl.so 00:03:50.003 LIB libspdk_bdev_zone_block.a 00:03:50.003 LIB libspdk_bdev_iscsi.a 00:03:50.003 SYMLINK libspdk_bdev_passthru.so 00:03:50.003 LIB libspdk_bdev_malloc.a 00:03:50.003 SO libspdk_bdev_delay.so.6.0 00:03:50.003 SO libspdk_bdev_zone_block.so.6.0 00:03:50.003 SO libspdk_bdev_iscsi.so.6.0 00:03:50.003 SO libspdk_bdev_malloc.so.6.0 00:03:50.003 SYMLINK libspdk_bdev_delay.so 00:03:50.003 LIB libspdk_bdev_virtio.a 00:03:50.003 SYMLINK libspdk_bdev_zone_block.so 00:03:50.003 LIB libspdk_bdev_lvol.a 00:03:50.003 SYMLINK libspdk_bdev_iscsi.so 00:03:50.003 SYMLINK libspdk_bdev_malloc.so 00:03:50.003 SO libspdk_bdev_virtio.so.6.0 00:03:50.003 SO libspdk_bdev_lvol.so.6.0 00:03:50.261 SYMLINK libspdk_bdev_virtio.so 00:03:50.261 SYMLINK libspdk_bdev_lvol.so 00:03:50.518 LIB libspdk_bdev_raid.a 00:03:50.518 SO libspdk_bdev_raid.so.6.0 00:03:50.518 SYMLINK libspdk_bdev_raid.so 00:03:51.893 LIB libspdk_bdev_nvme.a 00:03:51.893 SO libspdk_bdev_nvme.so.7.0 00:03:51.893 SYMLINK libspdk_bdev_nvme.so 00:03:52.151 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:52.151 CC module/event/subsystems/iobuf/iobuf.o 00:03:52.151 CC module/event/subsystems/vmd/vmd.o 00:03:52.151 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:52.151 CC module/event/subsystems/keyring/keyring.o 00:03:52.151 CC module/event/subsystems/scheduler/scheduler.o 00:03:52.151 CC module/event/subsystems/sock/sock.o 00:03:52.151 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:52.151 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:52.409 LIB libspdk_event_keyring.a 00:03:52.409 LIB libspdk_event_vhost_blk.a 00:03:52.409 LIB libspdk_event_sock.a 00:03:52.409 LIB libspdk_event_vfu_tgt.a 00:03:52.409 LIB libspdk_event_scheduler.a 00:03:52.409 LIB libspdk_event_vmd.a 00:03:52.409 LIB libspdk_event_iobuf.a 00:03:52.409 SO libspdk_event_keyring.so.1.0 00:03:52.409 SO libspdk_event_vhost_blk.so.3.0 00:03:52.409 SO libspdk_event_vfu_tgt.so.3.0 00:03:52.409 SO libspdk_event_sock.so.5.0 00:03:52.409 SO libspdk_event_scheduler.so.4.0 00:03:52.409 SO libspdk_event_vmd.so.6.0 00:03:52.409 SO libspdk_event_iobuf.so.3.0 00:03:52.409 SYMLINK libspdk_event_keyring.so 00:03:52.409 SYMLINK libspdk_event_vhost_blk.so 00:03:52.409 SYMLINK libspdk_event_vfu_tgt.so 00:03:52.409 SYMLINK libspdk_event_sock.so 00:03:52.409 SYMLINK libspdk_event_scheduler.so 00:03:52.409 SYMLINK libspdk_event_vmd.so 00:03:52.409 SYMLINK libspdk_event_iobuf.so 00:03:52.667 CC module/event/subsystems/accel/accel.o 00:03:52.667 LIB libspdk_event_accel.a 00:03:52.927 SO libspdk_event_accel.so.6.0 00:03:52.927 SYMLINK libspdk_event_accel.so 00:03:52.927 CC module/event/subsystems/bdev/bdev.o 00:03:53.185 LIB libspdk_event_bdev.a 00:03:53.185 SO libspdk_event_bdev.so.6.0 00:03:53.185 SYMLINK libspdk_event_bdev.so 00:03:53.444 CC module/event/subsystems/ublk/ublk.o 00:03:53.444 CC 
module/event/subsystems/nbd/nbd.o 00:03:53.444 CC module/event/subsystems/scsi/scsi.o 00:03:53.444 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:53.444 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:53.712 LIB libspdk_event_nbd.a 00:03:53.712 LIB libspdk_event_ublk.a 00:03:53.712 LIB libspdk_event_scsi.a 00:03:53.712 SO libspdk_event_nbd.so.6.0 00:03:53.712 SO libspdk_event_ublk.so.3.0 00:03:53.712 SO libspdk_event_scsi.so.6.0 00:03:53.712 SYMLINK libspdk_event_nbd.so 00:03:53.712 SYMLINK libspdk_event_ublk.so 00:03:53.712 SYMLINK libspdk_event_scsi.so 00:03:53.712 LIB libspdk_event_nvmf.a 00:03:53.712 SO libspdk_event_nvmf.so.6.0 00:03:53.712 SYMLINK libspdk_event_nvmf.so 00:03:53.970 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.970 CC module/event/subsystems/iscsi/iscsi.o 00:03:53.970 LIB libspdk_event_vhost_scsi.a 00:03:53.970 LIB libspdk_event_iscsi.a 00:03:53.970 SO libspdk_event_vhost_scsi.so.3.0 00:03:53.970 SO libspdk_event_iscsi.so.6.0 00:03:53.970 SYMLINK libspdk_event_vhost_scsi.so 00:03:54.232 SYMLINK libspdk_event_iscsi.so 00:03:54.232 SO libspdk.so.6.0 00:03:54.232 SYMLINK libspdk.so 00:03:54.495 CXX app/trace/trace.o 00:03:54.495 CC app/spdk_lspci/spdk_lspci.o 00:03:54.496 CC app/trace_record/trace_record.o 00:03:54.496 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.496 CC app/spdk_nvme_perf/perf.o 00:03:54.496 TEST_HEADER include/spdk/accel.h 00:03:54.496 CC app/spdk_nvme_identify/identify.o 00:03:54.496 TEST_HEADER include/spdk/accel_module.h 00:03:54.496 CC app/spdk_top/spdk_top.o 00:03:54.496 TEST_HEADER include/spdk/assert.h 00:03:54.496 CC test/rpc_client/rpc_client_test.o 00:03:54.496 TEST_HEADER include/spdk/barrier.h 00:03:54.496 TEST_HEADER include/spdk/base64.h 00:03:54.496 TEST_HEADER include/spdk/bdev.h 00:03:54.496 TEST_HEADER include/spdk/bdev_module.h 00:03:54.496 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.496 TEST_HEADER include/spdk/bit_array.h 00:03:54.496 TEST_HEADER include/spdk/bit_pool.h 00:03:54.496 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.496 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.496 TEST_HEADER include/spdk/blobfs.h 00:03:54.496 TEST_HEADER include/spdk/blob.h 00:03:54.496 TEST_HEADER include/spdk/conf.h 00:03:54.496 TEST_HEADER include/spdk/config.h 00:03:54.496 TEST_HEADER include/spdk/cpuset.h 00:03:54.496 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:54.496 TEST_HEADER include/spdk/crc16.h 00:03:54.496 CC app/spdk_dd/spdk_dd.o 00:03:54.496 TEST_HEADER include/spdk/crc32.h 00:03:54.496 TEST_HEADER include/spdk/crc64.h 00:03:54.496 CC app/nvmf_tgt/nvmf_main.o 00:03:54.496 TEST_HEADER include/spdk/dif.h 00:03:54.496 TEST_HEADER include/spdk/dma.h 00:03:54.496 TEST_HEADER include/spdk/endian.h 00:03:54.496 CC app/iscsi_tgt/iscsi_tgt.o 00:03:54.496 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.496 TEST_HEADER include/spdk/env.h 00:03:54.496 TEST_HEADER include/spdk/event.h 00:03:54.496 CC app/vhost/vhost.o 00:03:54.496 TEST_HEADER include/spdk/fd_group.h 00:03:54.496 TEST_HEADER include/spdk/fd.h 00:03:54.496 TEST_HEADER include/spdk/file.h 00:03:54.496 TEST_HEADER include/spdk/ftl.h 00:03:54.496 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.496 TEST_HEADER include/spdk/hexlify.h 00:03:54.496 TEST_HEADER include/spdk/histogram_data.h 00:03:54.496 CC app/spdk_tgt/spdk_tgt.o 00:03:54.496 TEST_HEADER include/spdk/idxd.h 00:03:54.496 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.496 TEST_HEADER include/spdk/init.h 00:03:54.496 CC examples/util/zipf/zipf.o 00:03:54.496 CC examples/ioat/perf/perf.o 00:03:54.496 CC 
examples/nvme/hello_world/hello_world.o 00:03:54.496 TEST_HEADER include/spdk/ioat.h 00:03:54.496 CC examples/vmd/lsvmd/lsvmd.o 00:03:54.496 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.496 CC examples/vmd/led/led.o 00:03:54.496 CC test/env/vtophys/vtophys.o 00:03:54.496 CC test/nvme/aer/aer.o 00:03:54.496 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.496 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.496 CC app/fio/nvme/fio_plugin.o 00:03:54.496 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.496 CC examples/ioat/verify/verify.o 00:03:54.496 CC examples/nvme/reconnect/reconnect.o 00:03:54.496 CC examples/accel/perf/accel_perf.o 00:03:54.496 CC test/event/event_perf/event_perf.o 00:03:54.496 TEST_HEADER include/spdk/json.h 00:03:54.496 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.496 CC examples/nvme/arbitration/arbitration.o 00:03:54.496 CC examples/sock/hello_world/hello_sock.o 00:03:54.496 CC test/thread/poller_perf/poller_perf.o 00:03:54.496 TEST_HEADER include/spdk/keyring.h 00:03:54.496 CC examples/idxd/perf/perf.o 00:03:54.496 TEST_HEADER include/spdk/keyring_module.h 00:03:54.496 TEST_HEADER include/spdk/likely.h 00:03:54.760 TEST_HEADER include/spdk/log.h 00:03:54.760 TEST_HEADER include/spdk/lvol.h 00:03:54.760 TEST_HEADER include/spdk/memory.h 00:03:54.760 TEST_HEADER include/spdk/mmio.h 00:03:54.760 TEST_HEADER include/spdk/nbd.h 00:03:54.760 TEST_HEADER include/spdk/notify.h 00:03:54.760 TEST_HEADER include/spdk/nvme.h 00:03:54.760 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.760 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.760 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.760 CC test/bdev/bdevio/bdevio.o 00:03:54.760 CC examples/nvmf/nvmf/nvmf.o 00:03:54.760 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.760 CC examples/bdev/hello_world/hello_bdev.o 00:03:54.760 CC app/fio/bdev/fio_plugin.o 00:03:54.760 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.760 CC test/accel/dif/dif.o 00:03:54.760 CC test/blobfs/mkfs/mkfs.o 00:03:54.760 CC examples/bdev/bdevperf/bdevperf.o 00:03:54.760 CC examples/thread/thread/thread_ex.o 00:03:54.760 CC examples/blob/hello_world/hello_blob.o 00:03:54.760 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.760 CC test/dma/test_dma/test_dma.o 00:03:54.760 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.760 CC test/app/bdev_svc/bdev_svc.o 00:03:54.760 CC examples/blob/cli/blobcli.o 00:03:54.760 TEST_HEADER include/spdk/nvmf.h 00:03:54.760 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.760 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.760 TEST_HEADER include/spdk/opal.h 00:03:54.760 TEST_HEADER include/spdk/opal_spec.h 00:03:54.760 TEST_HEADER include/spdk/pci_ids.h 00:03:54.760 TEST_HEADER include/spdk/pipe.h 00:03:54.760 TEST_HEADER include/spdk/queue.h 00:03:54.760 TEST_HEADER include/spdk/reduce.h 00:03:54.760 TEST_HEADER include/spdk/rpc.h 00:03:54.760 TEST_HEADER include/spdk/scheduler.h 00:03:54.760 TEST_HEADER include/spdk/scsi.h 00:03:54.760 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.760 LINK spdk_lspci 00:03:54.760 TEST_HEADER include/spdk/sock.h 00:03:54.760 TEST_HEADER include/spdk/stdinc.h 00:03:54.760 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.760 TEST_HEADER include/spdk/string.h 00:03:54.760 TEST_HEADER include/spdk/thread.h 00:03:54.760 TEST_HEADER include/spdk/trace.h 00:03:54.760 TEST_HEADER include/spdk/trace_parser.h 00:03:54.760 TEST_HEADER include/spdk/tree.h 00:03:54.760 TEST_HEADER include/spdk/ublk.h 00:03:54.760 TEST_HEADER include/spdk/util.h 00:03:54.760 CC test/lvol/esnap/esnap.o 
00:03:54.760 TEST_HEADER include/spdk/uuid.h 00:03:54.760 TEST_HEADER include/spdk/version.h 00:03:54.760 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.760 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:54.760 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.760 TEST_HEADER include/spdk/vhost.h 00:03:54.760 TEST_HEADER include/spdk/vmd.h 00:03:54.760 TEST_HEADER include/spdk/xor.h 00:03:54.760 TEST_HEADER include/spdk/zipf.h 00:03:54.760 CXX test/cpp_headers/accel.o 00:03:54.760 LINK rpc_client_test 00:03:54.760 LINK spdk_nvme_discover 00:03:55.027 LINK lsvmd 00:03:55.027 LINK interrupt_tgt 00:03:55.027 LINK vtophys 00:03:55.027 LINK led 00:03:55.027 LINK nvmf_tgt 00:03:55.027 LINK zipf 00:03:55.027 LINK poller_perf 00:03:55.027 LINK event_perf 00:03:55.027 LINK env_dpdk_post_init 00:03:55.027 LINK vhost 00:03:55.027 LINK spdk_trace_record 00:03:55.027 LINK iscsi_tgt 00:03:55.027 LINK ioat_perf 00:03:55.027 LINK spdk_tgt 00:03:55.027 LINK hello_world 00:03:55.027 CXX test/cpp_headers/accel_module.o 00:03:55.027 LINK bdev_svc 00:03:55.027 LINK verify 00:03:55.027 LINK mkfs 00:03:55.027 LINK hello_sock 00:03:55.293 LINK aer 00:03:55.293 LINK thread 00:03:55.293 LINK hello_bdev 00:03:55.293 LINK hello_blob 00:03:55.293 LINK spdk_dd 00:03:55.293 CXX test/cpp_headers/assert.o 00:03:55.293 LINK arbitration 00:03:55.293 LINK idxd_perf 00:03:55.293 LINK nvmf 00:03:55.293 LINK spdk_trace 00:03:55.293 LINK reconnect 00:03:55.293 CC test/env/memory/memory_ut.o 00:03:55.293 CC test/nvme/reset/reset.o 00:03:55.293 CC test/env/pci/pci_ut.o 00:03:55.293 CC test/event/reactor/reactor.o 00:03:55.293 CC test/app/jsoncat/jsoncat.o 00:03:55.293 CC examples/nvme/hotplug/hotplug.o 00:03:55.293 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:55.293 LINK test_dma 00:03:55.555 CC test/app/histogram_perf/histogram_perf.o 00:03:55.556 CC test/event/reactor_perf/reactor_perf.o 00:03:55.556 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:55.556 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.556 LINK bdevio 00:03:55.556 CXX test/cpp_headers/barrier.o 00:03:55.556 CC examples/nvme/abort/abort.o 00:03:55.556 CC test/app/stub/stub.o 00:03:55.556 CC test/nvme/e2edp/nvme_dp.o 00:03:55.556 LINK accel_perf 00:03:55.556 LINK dif 00:03:55.556 CC test/nvme/sgl/sgl.o 00:03:55.556 CC test/nvme/overhead/overhead.o 00:03:55.556 LINK nvme_manage 00:03:55.556 LINK nvme_fuzz 00:03:55.556 CXX test/cpp_headers/base64.o 00:03:55.556 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.556 CC test/event/app_repeat/app_repeat.o 00:03:55.556 CXX test/cpp_headers/bdev.o 00:03:55.556 CC test/nvme/startup/startup.o 00:03:55.556 CC test/nvme/err_injection/err_injection.o 00:03:55.556 LINK blobcli 00:03:55.556 CXX test/cpp_headers/bdev_module.o 00:03:55.556 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.556 LINK spdk_nvme 00:03:55.821 LINK spdk_bdev 00:03:55.821 LINK reactor 00:03:55.821 CXX test/cpp_headers/bdev_zone.o 00:03:55.821 CC test/nvme/reserve/reserve.o 00:03:55.821 LINK jsoncat 00:03:55.821 CC test/event/scheduler/scheduler.o 00:03:55.821 CXX test/cpp_headers/bit_array.o 00:03:55.821 LINK reactor_perf 00:03:55.821 LINK histogram_perf 00:03:55.821 CC test/nvme/simple_copy/simple_copy.o 00:03:55.821 CC test/nvme/boot_partition/boot_partition.o 00:03:55.821 CC test/nvme/connect_stress/connect_stress.o 00:03:55.821 CC test/nvme/compliance/nvme_compliance.o 00:03:55.821 LINK cmb_copy 00:03:55.821 CC test/nvme/fused_ordering/fused_ordering.o 00:03:55.821 CXX test/cpp_headers/bit_pool.o 00:03:55.821 CXX 
test/cpp_headers/blob_bdev.o 00:03:55.821 CXX test/cpp_headers/blobfs_bdev.o 00:03:55.821 LINK reset 00:03:55.821 LINK stub 00:03:55.821 CXX test/cpp_headers/blobfs.o 00:03:55.821 LINK hotplug 00:03:55.821 CXX test/cpp_headers/blob.o 00:03:55.821 LINK app_repeat 00:03:56.082 CXX test/cpp_headers/conf.o 00:03:56.082 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.082 CXX test/cpp_headers/config.o 00:03:56.082 LINK mem_callbacks 00:03:56.082 CC test/nvme/fdp/fdp.o 00:03:56.082 CXX test/cpp_headers/cpuset.o 00:03:56.082 CXX test/cpp_headers/crc16.o 00:03:56.082 CC test/nvme/cuse/cuse.o 00:03:56.082 LINK spdk_nvme_perf 00:03:56.082 LINK startup 00:03:56.082 CXX test/cpp_headers/crc32.o 00:03:56.082 CXX test/cpp_headers/crc64.o 00:03:56.082 LINK err_injection 00:03:56.082 CXX test/cpp_headers/dif.o 00:03:56.082 CXX test/cpp_headers/dma.o 00:03:56.082 LINK pmr_persistence 00:03:56.082 LINK spdk_nvme_identify 00:03:56.082 CXX test/cpp_headers/endian.o 00:03:56.082 CXX test/cpp_headers/env_dpdk.o 00:03:56.082 LINK nvme_dp 00:03:56.082 LINK sgl 00:03:56.082 CXX test/cpp_headers/env.o 00:03:56.082 CXX test/cpp_headers/event.o 00:03:56.082 CXX test/cpp_headers/fd_group.o 00:03:56.082 LINK overhead 00:03:56.082 LINK boot_partition 00:03:56.082 LINK reserve 00:03:56.082 LINK connect_stress 00:03:56.082 LINK pci_ut 00:03:56.082 LINK scheduler 00:03:56.082 LINK bdevperf 00:03:56.082 LINK fused_ordering 00:03:56.082 LINK abort 00:03:56.082 CXX test/cpp_headers/fd.o 00:03:56.082 LINK spdk_top 00:03:56.351 CXX test/cpp_headers/file.o 00:03:56.351 LINK simple_copy 00:03:56.351 CXX test/cpp_headers/ftl.o 00:03:56.351 CXX test/cpp_headers/gpt_spec.o 00:03:56.351 CXX test/cpp_headers/hexlify.o 00:03:56.351 CXX test/cpp_headers/histogram_data.o 00:03:56.351 CXX test/cpp_headers/idxd.o 00:03:56.351 CXX test/cpp_headers/idxd_spec.o 00:03:56.351 CXX test/cpp_headers/init.o 00:03:56.351 CXX test/cpp_headers/ioat.o 00:03:56.351 CXX test/cpp_headers/ioat_spec.o 00:03:56.351 CXX test/cpp_headers/iscsi_spec.o 00:03:56.351 CXX test/cpp_headers/json.o 00:03:56.351 CXX test/cpp_headers/jsonrpc.o 00:03:56.351 LINK doorbell_aers 00:03:56.351 CXX test/cpp_headers/keyring.o 00:03:56.351 LINK vhost_fuzz 00:03:56.351 CXX test/cpp_headers/keyring_module.o 00:03:56.351 CXX test/cpp_headers/likely.o 00:03:56.351 CXX test/cpp_headers/log.o 00:03:56.351 CXX test/cpp_headers/lvol.o 00:03:56.351 CXX test/cpp_headers/memory.o 00:03:56.351 CXX test/cpp_headers/mmio.o 00:03:56.351 CXX test/cpp_headers/nbd.o 00:03:56.351 CXX test/cpp_headers/notify.o 00:03:56.351 CXX test/cpp_headers/nvme.o 00:03:56.351 LINK nvme_compliance 00:03:56.351 CXX test/cpp_headers/nvme_intel.o 00:03:56.351 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.351 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.351 CXX test/cpp_headers/nvme_spec.o 00:03:56.351 CXX test/cpp_headers/nvme_zns.o 00:03:56.351 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.613 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.613 CXX test/cpp_headers/nvmf_spec.o 00:03:56.613 CXX test/cpp_headers/nvmf.o 00:03:56.613 CXX test/cpp_headers/nvmf_transport.o 00:03:56.613 CXX test/cpp_headers/opal.o 00:03:56.613 CXX test/cpp_headers/opal_spec.o 00:03:56.613 CXX test/cpp_headers/pci_ids.o 00:03:56.613 CXX test/cpp_headers/pipe.o 00:03:56.613 CXX test/cpp_headers/queue.o 00:03:56.613 CXX test/cpp_headers/reduce.o 00:03:56.613 CXX test/cpp_headers/rpc.o 00:03:56.613 CXX test/cpp_headers/scheduler.o 00:03:56.613 CXX test/cpp_headers/scsi.o 00:03:56.613 CXX test/cpp_headers/scsi_spec.o 00:03:56.613 CXX 
test/cpp_headers/sock.o 00:03:56.613 CXX test/cpp_headers/stdinc.o 00:03:56.613 CXX test/cpp_headers/string.o 00:03:56.613 CXX test/cpp_headers/thread.o 00:03:56.613 CXX test/cpp_headers/trace.o 00:03:56.613 CXX test/cpp_headers/trace_parser.o 00:03:56.613 CXX test/cpp_headers/tree.o 00:03:56.613 CXX test/cpp_headers/ublk.o 00:03:56.613 CXX test/cpp_headers/util.o 00:03:56.613 LINK fdp 00:03:56.613 CXX test/cpp_headers/uuid.o 00:03:56.876 CXX test/cpp_headers/version.o 00:03:56.876 CXX test/cpp_headers/vfio_user_pci.o 00:03:56.876 CXX test/cpp_headers/vfio_user_spec.o 00:03:56.876 CXX test/cpp_headers/vhost.o 00:03:56.876 CXX test/cpp_headers/vmd.o 00:03:56.876 CXX test/cpp_headers/xor.o 00:03:56.876 CXX test/cpp_headers/zipf.o 00:03:57.441 LINK memory_ut 00:03:57.699 LINK iscsi_fuzz 00:03:57.699 LINK cuse 00:04:00.993 LINK esnap 00:04:00.993 00:04:00.993 real 0m40.878s 00:04:00.993 user 7m34.159s 00:04:00.993 sys 1m50.168s 00:04:00.993 03:03:27 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:00.993 03:03:27 make -- common/autotest_common.sh@10 -- $ set +x 00:04:00.993 ************************************ 00:04:00.993 END TEST make 00:04:00.993 ************************************ 00:04:00.993 03:03:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:00.993 03:03:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:00.993 03:03:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:00.993 03:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:00.993 03:03:27 -- pm/common@44 -- $ pid=199268 00:04:00.993 03:03:27 -- pm/common@50 -- $ kill -TERM 199268 00:04:00.993 03:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:00.993 03:03:27 -- pm/common@44 -- $ pid=199270 00:04:00.993 03:03:27 -- pm/common@50 -- $ kill -TERM 199270 00:04:00.993 03:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:00.993 03:03:27 -- pm/common@44 -- $ pid=199272 00:04:00.993 03:03:27 -- pm/common@50 -- $ kill -TERM 199272 00:04:00.993 03:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:00.993 03:03:27 -- pm/common@44 -- $ pid=199301 00:04:00.993 03:03:27 -- pm/common@50 -- $ sudo -E kill -TERM 199301 00:04:00.993 03:03:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:00.993 03:03:27 -- nvmf/common.sh@7 -- # uname -s 00:04:00.993 03:03:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.993 03:03:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.993 03:03:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.993 03:03:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.993 03:03:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.993 03:03:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.993 03:03:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:00.993 03:03:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.993 
03:03:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.993 03:03:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.993 03:03:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:00.993 03:03:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:00.993 03:03:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.993 03:03:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.993 03:03:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:00.993 03:03:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.993 03:03:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:00.993 03:03:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.993 03:03:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.993 03:03:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.993 03:03:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.993 03:03:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.993 03:03:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.993 03:03:27 -- paths/export.sh@5 -- # export PATH 00:04:00.993 03:03:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.993 03:03:27 -- nvmf/common.sh@47 -- # : 0 00:04:00.993 03:03:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:00.993 03:03:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:00.993 03:03:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.993 03:03:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.993 03:03:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.993 03:03:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:00.993 03:03:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:00.993 03:03:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:00.993 03:03:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:00.993 03:03:27 -- spdk/autotest.sh@32 -- # uname -s 00:04:00.993 03:03:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:00.993 03:03:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:00.993 03:03:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:00.993 03:03:27 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:00.993 03:03:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:00.993 03:03:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:00.993 03:03:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:00.993 03:03:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:00.993 03:03:27 -- spdk/autotest.sh@48 -- # udevadm_pid=275616 00:04:00.993 03:03:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:00.993 03:03:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:00.993 03:03:27 -- pm/common@17 -- # local monitor 00:04:00.993 03:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@21 -- # date +%s 00:04:00.993 03:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.993 03:03:27 -- pm/common@21 -- # date +%s 00:04:00.993 03:03:27 -- pm/common@25 -- # sleep 1 00:04:00.993 03:03:27 -- pm/common@21 -- # date +%s 00:04:00.993 03:03:27 -- pm/common@21 -- # date +%s 00:04:00.993 03:03:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721696607 00:04:00.993 03:03:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721696607 00:04:00.993 03:03:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721696607 00:04:00.993 03:03:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721696607 00:04:00.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721696607_collect-vmstat.pm.log 00:04:00.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721696607_collect-cpu-load.pm.log 00:04:00.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721696607_collect-cpu-temp.pm.log 00:04:00.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721696607_collect-bmc-pm.bmc.pm.log 00:04:02.370 03:03:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:02.370 03:03:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:02.370 03:03:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:02.370 03:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.370 03:03:28 -- spdk/autotest.sh@59 -- # create_test_list 00:04:02.370 03:03:28 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:02.370 03:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.370 03:03:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:02.370 03:03:28 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.370 03:03:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.370 03:03:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:02.370 03:03:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.370 03:03:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:02.370 03:03:28 -- common/autotest_common.sh@1451 -- # uname 00:04:02.370 03:03:28 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:02.370 03:03:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:02.370 03:03:28 -- common/autotest_common.sh@1471 -- # uname 00:04:02.370 03:03:28 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:02.370 03:03:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:02.370 03:03:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:02.370 03:03:28 -- spdk/autotest.sh@72 -- # hash lcov 00:04:02.370 03:03:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:02.370 03:03:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:02.370 --rc lcov_branch_coverage=1 00:04:02.370 --rc lcov_function_coverage=1 00:04:02.370 --rc genhtml_branch_coverage=1 00:04:02.370 --rc genhtml_function_coverage=1 00:04:02.370 --rc genhtml_legend=1 00:04:02.370 --rc geninfo_all_blocks=1 00:04:02.370 ' 00:04:02.370 03:03:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:02.370 --rc lcov_branch_coverage=1 00:04:02.370 --rc lcov_function_coverage=1 00:04:02.370 --rc genhtml_branch_coverage=1 00:04:02.370 --rc genhtml_function_coverage=1 00:04:02.370 --rc genhtml_legend=1 00:04:02.370 --rc geninfo_all_blocks=1 00:04:02.370 ' 00:04:02.370 03:03:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:02.370 --rc lcov_branch_coverage=1 00:04:02.370 --rc lcov_function_coverage=1 00:04:02.370 --rc genhtml_branch_coverage=1 00:04:02.370 --rc genhtml_function_coverage=1 00:04:02.370 --rc genhtml_legend=1 00:04:02.370 --rc geninfo_all_blocks=1 00:04:02.370 --no-external' 00:04:02.370 03:03:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:02.370 --rc lcov_branch_coverage=1 00:04:02.370 --rc lcov_function_coverage=1 00:04:02.370 --rc genhtml_branch_coverage=1 00:04:02.370 --rc genhtml_function_coverage=1 00:04:02.370 --rc genhtml_legend=1 00:04:02.370 --rc geninfo_all_blocks=1 00:04:02.370 --no-external' 00:04:02.370 03:03:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:02.370 lcov: LCOV version 1.14 00:04:02.370 03:03:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:17.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:17.241 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:35.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:35.345 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:35.346 geninfo: WARNING: GCOV did not produce any data for the remaining /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno files (no functions found in any of them) 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:35.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:35.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:35.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:35.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:35.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:35.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:35.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:35.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:35.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:39.528 03:04:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:39.528 03:04:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:39.528 03:04:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.528 03:04:05 -- spdk/autotest.sh@91 -- # rm -f 00:04:39.528 03:04:05 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.463 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:40.463 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:40.463 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:40.463 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:40.463 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:40.463 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:40.463 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:40.463 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:40.463 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:40.463 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:40.463 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:40.463 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:40.463 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:40.463 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:40.463 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:40.463 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:40.463 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:40.722 03:04:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:40.722 03:04:07 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:40.722 03:04:07 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:40.722 03:04:07 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:40.722 03:04:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:40.722 03:04:07 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:40.722 03:04:07 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:40.722 03:04:07 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.722 03:04:07 
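The long run of "no functions found" warnings above is expected: the cpp_headers objects are compile-only checks (each .gcno belongs to a translation unit that does nothing but include one SPDK header), so geninfo has no function data to record for them. A minimal sketch of filtering those files out of a local coverage capture, assuming lcov is installed; the exact flags autotest uses may differ:
  # capture coverage from the build tree, then drop the header-compile objects
  lcov --capture --directory . --output-file cov.info
  lcov --remove cov.info '*/test/cpp_headers/*' --output-file cov.filtered.info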
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.722 03:04:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:40.722 03:04:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:40.722 03:04:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:40.722 03:04:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:40.722 03:04:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:40.722 03:04:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:40.722 No valid GPT data, bailing 00:04:40.722 03:04:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.722 03:04:07 -- scripts/common.sh@391 -- # pt= 00:04:40.722 03:04:07 -- scripts/common.sh@392 -- # return 1 00:04:40.722 03:04:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:40.722 1+0 records in 00:04:40.722 1+0 records out 00:04:40.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196795 s, 533 MB/s 00:04:40.722 03:04:07 -- spdk/autotest.sh@118 -- # sync 00:04:40.722 03:04:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:40.722 03:04:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:40.722 03:04:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:42.625 03:04:08 -- spdk/autotest.sh@124 -- # uname -s 00:04:42.625 03:04:08 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:42.625 03:04:08 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:42.625 03:04:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.625 03:04:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.625 03:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:42.625 ************************************ 00:04:42.625 START TEST setup.sh 00:04:42.625 ************************************ 00:04:42.625 03:04:08 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:42.625 * Looking for test storage... 00:04:42.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.625 03:04:09 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:42.625 03:04:09 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:42.625 03:04:09 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:42.625 03:04:09 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.625 03:04:09 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.625 03:04:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.625 ************************************ 00:04:42.625 START TEST acl 00:04:42.625 ************************************ 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:42.625 * Looking for test storage... 
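The pre_cleanup step above walks the NVMe namespaces, skips zoned ones (the /sys/block/*/queue/zoned test), and zero-fills the first MiB of any disk that carries no partition table, exactly as the dd output shows. A rough standalone equivalent of that logic (device names illustrative, not a guaranteed match for setup.sh's full behaviour):
  for dev in /dev/nvme*n1; do
      name=${dev##*/}
      # leave zoned namespaces alone, mirroring the queue/zoned check above
      [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]] && continue
      # wipe only devices with no partition table (blkid prints nothing for them)
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done
  sync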
00:04:42.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.625 03:04:09 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:42.625 03:04:09 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:42.625 03:04:09 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.625 03:04:09 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.000 03:04:10 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:44.000 03:04:10 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:44.000 03:04:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.000 03:04:10 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:44.000 03:04:10 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.000 03:04:10 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:44.934 Hugepages 00:04:44.934 node hugesize free / total 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:44.934 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 00:04:45.193 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:45.193 03:04:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:45.193 03:04:11 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.193 03:04:11 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.193 03:04:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:45.193 ************************************ 00:04:45.193 START TEST denied 00:04:45.193 ************************************ 00:04:45.193 03:04:11 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:45.193 03:04:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:45.193 03:04:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:45.193 03:04:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:45.193 03:04:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.193 03:04:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.092 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:47.092 03:04:13 setup.sh.acl.denied -- 
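The device walk above is driven by the table that `setup.sh status` prints (its header appears a few lines earlier); collect_setup_devs keeps only BDFs whose Driver column is nvme, which is how 0000:88:00.0 ends up in devs. A cut-down sketch of the same parse, with the column positions assumed from that header and the script path shortened:
  while read -r _ bdf _ _ _ driver _; do
      [[ $bdf == *:*:*.* ]] || continue          # skip the header and hugepage summary lines
      [[ $driver == nvme ]] && echo "NVMe controller at $bdf"
  done < <(./scripts/setup.sh status)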
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.092 03:04:13 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.994 00:04:48.994 real 0m3.831s 00:04:48.994 user 0m1.101s 00:04:48.994 sys 0m1.813s 00:04:48.994 03:04:15 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.994 03:04:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:48.994 ************************************ 00:04:48.994 END TEST denied 00:04:48.994 ************************************ 00:04:48.994 03:04:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:48.994 03:04:15 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.994 03:04:15 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.994 03:04:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:48.994 ************************************ 00:04:48.994 START TEST allowed 00:04:48.994 ************************************ 00:04:48.994 03:04:15 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:48.994 03:04:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:48.994 03:04:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:48.994 03:04:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:48.994 03:04:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.994 03:04:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.554 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.554 03:04:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:51.554 03:04:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:51.554 03:04:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:51.554 03:04:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.554 03:04:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.930 00:04:52.930 real 0m3.676s 00:04:52.930 user 0m0.947s 00:04:52.930 sys 0m1.565s 00:04:52.930 03:04:19 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.930 03:04:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:52.930 ************************************ 00:04:52.930 END TEST allowed 00:04:52.930 ************************************ 00:04:52.930 00:04:52.930 real 0m10.197s 00:04:52.930 user 0m3.123s 00:04:52.930 sys 0m5.051s 00:04:52.930 03:04:19 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.930 03:04:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:52.930 ************************************ 00:04:52.930 END TEST acl 00:04:52.930 ************************************ 00:04:52.930 03:04:19 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:52.930 03:04:19 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.930 03:04:19 setup.sh -- 
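The two acl sub-tests above exercise setup.sh's block/allow lists: denied hides 0000:88:00.0 behind PCI_BLOCKED (the controller is skipped and stays on the kernel nvme driver), and allowed then rebinds only that controller via PCI_ALLOWED. Driven by hand with the same BDF as this run (path shortened), it would look roughly like:
  PCI_BLOCKED='0000:88:00.0' ./scripts/setup.sh config   # "Skipping denied controller at 0000:88:00.0"
  ./scripts/setup.sh reset
  PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config    # only this controller: nvme -> vfio-pci
  ./scripts/setup.sh reset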
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.930 03:04:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.930 ************************************ 00:04:52.930 START TEST hugepages 00:04:52.930 ************************************ 00:04:52.930 03:04:19 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:52.930 * Looking for test storage... 00:04:52.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41223436 kB' 'MemAvailable: 44736204 kB' 'Buffers: 3736 kB' 'Cached: 12772732 kB' 'SwapCached: 0 kB' 'Active: 9718180 kB' 'Inactive: 3509488 kB' 'Active(anon): 9323132 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454364 kB' 'Mapped: 200704 kB' 'Shmem: 8871932 kB' 'KReclaimable: 203712 kB' 'Slab: 590476 kB' 'SReclaimable: 203712 kB' 'SUnreclaim: 386764 kB' 'KernelStack: 12800 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10472132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.930 03:04:19 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.930 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.931 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.932 03:04:19 
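The long trace above is get_meminfo scanning /proc/meminfo field by field until it hits Hugepagesize, echoing 2048 (kB), which hugepages.sh stores as default_hugepages; clear_hp then writes 0 to every per-node nr_hugepages knob. A stripped-down sketch of that lookup (the real helper also handles per-node meminfo via the node argument, omitted here):
  get_meminfo() {
      local key=$1
      # print the numeric value of the requested field, e.g. Hugepagesize -> 2048
      awk -v k="$key" -F': *' '$1 == k { print $2 + 0 }' /proc/meminfo
  }
  get_meminfo Hugepagesize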
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.932 03:04:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:52.932 03:04:19 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.932 03:04:19 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.932 03:04:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.932 ************************************ 00:04:52.932 START TEST default_setup 00:04:52.932 ************************************ 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.932 03:04:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.310 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:54.310 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
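default_setup above asks for 1024 x 2048 kB pages on node 0 and then lets setup.sh rebind the ioatdma and NVMe devices to vfio-pci, as the messages before and after this point show. A hedged sketch of driving the same thing by hand, using the NRHUGE/HUGENODE variables that hugepages.sh unsets earlier (semantics assumed; the test harness sets these internally):
  # reserve 1024 2 MiB hugepages on NUMA node 0 and bind devices for userspace drivers
  HUGENODE=0 NRHUGE=1024 ./scripts/setup.sh
  # or write the sysfs knob directly, as clear_hp does when zeroing it
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages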
00:04:54.310 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:54.310 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:55.249 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43336144 kB' 'MemAvailable: 46849048 kB' 'Buffers: 3736 kB' 'Cached: 12772832 kB' 'SwapCached: 0 kB' 'Active: 9730420 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335372 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466616 kB' 'Mapped: 200088 kB' 'Shmem: 8872032 kB' 'KReclaimable: 203984 kB' 'Slab: 590356 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386372 kB' 'KernelStack: 12704 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 
03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.249 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.250 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- 
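
The trace above is setup/common.sh resolving AnonHugePages from /proc/meminfo: the helper snapshots the file with mapfile, then walks each 'Key: value' pair, skipping (continue) every key until the requested one matches and its value is echoed back -- here 0 kB, so the test records anon=0. A minimal standalone sketch of that lookup pattern, assuming a plain /proc/meminfo read without the per-node (/sys/devices/system/node/...) handling the traced helper also has; the function name is illustrative, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch only: echo the value of a single /proc/meminfo key, the same way the
    # traced get_meminfo call resolves AnonHugePages above.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every line until the requested key is found, then print its value.
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo_value AnonHugePages)   # 0 in this run
    echo "anon=$anon"
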
setup/common.sh@20 -- # local mem_f mem 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43342084 kB' 'MemAvailable: 46854988 kB' 'Buffers: 3736 kB' 'Cached: 12772832 kB' 'SwapCached: 0 kB' 'Active: 9730108 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466268 kB' 'Mapped: 200004 kB' 'Shmem: 8872032 kB' 'KReclaimable: 203984 kB' 'Slab: 590356 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386372 kB' 'KernelStack: 12688 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196920 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.251 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:55.252 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
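
The same lookup is then repeated for HugePages_Surp (surplus pages allocated beyond the configured pool) and, in the trace that follows, HugePages_Rsvd (pages reserved against future faults); both come back 0, which is what verify_nr_hugepages expects on an idle node. The counters it is scanning for can also be inspected directly -- an illustrative one-liner, with the values reported in this run shown as comments:

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB
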
-r var val _ 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43342004 kB' 'MemAvailable: 46854908 kB' 'Buffers: 3736 kB' 'Cached: 12772852 kB' 'SwapCached: 0 kB' 'Active: 9730020 kB' 'Inactive: 3509488 kB' 'Active(anon): 9334972 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466144 kB' 'Mapped: 199928 kB' 'Shmem: 8872052 kB' 'KReclaimable: 203984 kB' 'Slab: 590320 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386336 kB' 'KernelStack: 12704 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196904 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.514 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.515 nr_hugepages=1024 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.515 resv_hugepages=0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.515 surplus_hugepages=0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.515 anon_hugepages=0 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43343836 kB' 'MemAvailable: 46856740 kB' 'Buffers: 3736 kB' 'Cached: 12772872 kB' 'SwapCached: 0 kB' 'Active: 9729976 kB' 'Inactive: 3509488 kB' 'Active(anon): 9334928 kB' 'Inactive(anon): 0 kB' 
'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466112 kB' 'Mapped: 199928 kB' 'Shmem: 8872072 kB' 'KReclaimable: 203984 kB' 'Slab: 590320 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386336 kB' 'KernelStack: 12688 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196904 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.515 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
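[editor's note] The long run of "[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" trace above is the test's get_meminfo helper in setup/common.sh scanning /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time until it reaches the requested field, then echoing that value (here 1024 for HugePages_Total). A minimal sketch of that pattern is below; it is assumption-level illustration of what the trace shows, not a quote of the setup/common.sh source, and the helper name get_meminfo_sketch is invented for this note.

  # Sketch of the meminfo-scanning pattern visible in the trace above.
  # $1 = field name (e.g. HugePages_Total), $2 = optional NUMA node id.
  get_meminfo_sketch() {
    local get=$1 node=${2-} mem_f=/proc/meminfo
    # Per-node query: /sys/devices/system/node/node<N>/meminfo, as in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node meminfo lines carry a "Node N " prefix; drop it before splitting.
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
  }

  # Usage mirroring the accounting check traced above (names illustrative):
  #   resv=$(get_meminfo_sketch HugePages_Rsvd)
  #   surp=$(get_meminfo_sketch HugePages_Surp)
  #   total=$(get_meminfo_sketch HugePages_Total)
  #   (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"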
00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26755528 kB' 'MemUsed: 6074356 kB' 'SwapCached: 0 kB' 'Active: 2658528 kB' 'Inactive: 155448 kB' 'Active(anon): 2497236 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2591712 kB' 'Mapped: 67628 kB' 'AnonPages: 225440 kB' 'Shmem: 2274972 kB' 'KernelStack: 6728 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 333292 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 233240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.517 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.518 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.519 node0=1024 expecting 1024 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.519 00:04:55.519 real 0m2.494s 00:04:55.519 user 0m0.702s 00:04:55.519 sys 0m0.904s 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.519 03:04:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:55.519 ************************************ 00:04:55.519 END TEST default_setup 00:04:55.519 ************************************ 00:04:55.519 03:04:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:55.519 03:04:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.519 03:04:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.519 03:04:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.519 ************************************ 00:04:55.519 START TEST per_node_1G_alloc 00:04:55.519 ************************************ 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.519 03:04:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.452 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.452 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.452 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.452 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.713 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.713 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.713 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.713 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.713 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.713 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.713 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.713 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.713 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.713 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.713 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.713 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.713 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:56.713 03:04:23 
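[editor's note] At this point the per_node_1G_alloc test has asked scripts/setup.sh for 512 hugepages on each of NUMA nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1) and is about to re-verify the counts. scripts/setup.sh itself is not shown in this log, so the sketch below only illustrates the standard kernel sysfs interface such a per-node request would drive; it is not that script's code, and the variable names are illustrative.

  # Illustrative per-node 2 MiB hugepage request via the kernel sysfs interface.
  NRHUGE=${NRHUGE:-512}
  for node in 0 1; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" | sudo tee "$sysfs" >/dev/null   # request 512 x 2 MiB pages on this node
    echo "node$node: $(cat "$sysfs") hugepages allocated"
  done

The verify pass that follows in the log sums these per-node counts (1024 total across both nodes) and checks them against nr_hugepages plus surplus and reserved pages, the same accounting used in the default_setup test above.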
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43316356 kB' 'MemAvailable: 46829260 kB' 'Buffers: 3736 kB' 'Cached: 12772948 kB' 'SwapCached: 0 kB' 'Active: 9730464 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335416 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466536 kB' 'Mapped: 199936 kB' 'Shmem: 8872148 kB' 'KReclaimable: 203984 kB' 'Slab: 590228 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386244 kB' 'KernelStack: 12688 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.713 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.714 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43316372 kB' 'MemAvailable: 46829276 kB' 'Buffers: 3736 kB' 'Cached: 12772948 kB' 'SwapCached: 0 kB' 'Active: 9730060 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466076 kB' 'Mapped: 199936 kB' 'Shmem: 8872148 kB' 'KReclaimable: 203984 kB' 'Slab: 590188 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386204 kB' 'KernelStack: 12704 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.715 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.716 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc 
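After surp=0 is recorded above, the same helper is invoked again for HugePages_Rsvd. Pieced together from the commands visible in the trace (mem_f=/proc/meminfo, the per-node meminfo existence test, mapfile, the "Node N" prefix strip, and the read loop), get_meminfo roughly does the following. This is a sketch reconstructed from the trace, not the verbatim setup/common.sh source, and details may differ:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern that strips "Node N " prefixes
    get_meminfo() {     # reconstructed sketch of setup/common.sh::get_meminfo
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read the per-NUMA-node file instead; the run above
        # passes no node, so the [[ -e .../node/node/meminfo ]] test fails.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp    # prints 0 on the system captured in this log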
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43316124 kB' 'MemAvailable: 46829028 kB' 'Buffers: 3736 kB' 'Cached: 12772972 kB' 'SwapCached: 0 kB' 'Active: 9730280 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466340 kB' 'Mapped: 199936 kB' 'Shmem: 8872172 kB' 'KReclaimable: 203984 kB' 'Slab: 590272 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386288 kB' 'KernelStack: 12736 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.717 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.718 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.718 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.978 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.979 nr_hugepages=1024 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.979 resv_hugepages=0 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.979 surplus_hugepages=0 00:04:56.979 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.979 anon_hugepages=0 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43313884 kB' 'MemAvailable: 46826788 kB' 'Buffers: 3736 kB' 'Cached: 12772996 kB' 'SwapCached: 0 kB' 'Active: 9730268 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335220 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466304 kB' 'Mapped: 199936 kB' 'Shmem: 8872196 kB' 'KReclaimable: 203984 kB' 'Slab: 590272 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386288 kB' 'KernelStack: 12720 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10486804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.979 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.980 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27778488 kB' 'MemUsed: 5051396 kB' 'SwapCached: 0 kB' 'Active: 2657792 kB' 'Inactive: 155448 kB' 'Active(anon): 2496500 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2591716 kB' 'Mapped: 67628 kB' 'AnonPages: 224640 kB' 'Shmem: 2274976 kB' 'KernelStack: 6696 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 333140 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 233088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.981 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15535776 kB' 'MemUsed: 12176048 kB' 'SwapCached: 0 kB' 'Active: 7072280 kB' 'Inactive: 3354040 kB' 'Active(anon): 6838524 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3354040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10185040 kB' 'Mapped: 132308 kB' 'AnonPages: 241432 kB' 'Shmem: 6597244 kB' 'KernelStack: 6040 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103932 kB' 'Slab: 257132 kB' 'SReclaimable: 103932 kB' 'SUnreclaim: 153200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:56.982 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.983 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': '
[xtrace trimmed: the setup/common.sh@31-32 read/compare loop walks the remaining node meminfo fields, Inactive(file) through HugePages_Free, none of which match HugePages_Surp]
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:56.984 node0=512 expecting 512
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:56.984 node1=512 expecting 512
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:56.984
00:04:56.984 real	0m1.423s
00:04:56.984 user	0m0.581s
00:04:56.984 sys	0m0.804s
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:56.984 03:04:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:56.984 ************************************
00:04:56.984 END TEST per_node_1G_alloc
00:04:56.984 ************************************
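The per_node_1G_alloc test above ends with both NUMA nodes reporting 512 hugepages against an expected 512. As a rough companion to that check, the following is a minimal bash sketch that reads the same per-node counters straight from sysfs; the expected count, the 2048 kB page size and the output wording are assumptions for illustration, not taken from the SPDK setup scripts.

#!/usr/bin/env bash
# Sketch: report per-NUMA-node 2048 kB hugepage counts and compare them with an expected value.
expected=512    # assumed per-node target, matching the 'expecting 512' lines above
size_kb=2048    # assumed hugepage size (2 MB)
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    nr=$(cat "$node_dir/hugepages/hugepages-${size_kb}kB/nr_hugepages")
    echo "node${node}=${nr} expecting ${expected}"
    (( nr == expected )) || echo "node${node}: unexpected count" >&2
done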
00:04:56.984 03:04:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:56.984 03:04:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:56.984 03:04:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:56.984 03:04:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:56.984 ************************************
00:04:56.984 START TEST even_2G_alloc
00:04:56.984 ************************************
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
[xtrace trimmed: setup/hugepages.sh@81-84 loops twice over _no_nodes=2, setting nodes_test[1]=512 and nodes_test[0]=512]
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.984 03:04:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:57.916 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.916 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:57.916 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.916 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.916 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.916 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.916 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.916 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.916 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:57.916 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.916 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.916 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.916 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.916 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.916 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.916 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.916 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
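even_2G_alloc above requests 2097152 kB of hugepages (get_test_nr_hugepages 2097152) and, with HUGE_EVEN_ALLOC=yes on this two-node host, arrives at nr_hugepages=1024 split as 512 per node before handing off to scripts/setup.sh. A small sketch of that arithmetic follows, assuming the 2048 kB default hugepage size reported later in /proc/meminfo; the variable names are illustrative rather than the script's own.

#!/usr/bin/env bash
# Sketch: reproduce the hugepage arithmetic behind the trace above.
size=2097152        # requested size in kB (get_test_nr_hugepages 2097152)
hugepage_kb=2048    # assumed default hugepage size in kB (Hugepagesize: 2048 kB)
no_nodes=2          # assumed two NUMA nodes, as in the trace (_no_nodes=2)

nr_hugepages=$(( size / hugepage_kb ))     # 2097152 / 2048 = 1024 pages in total
per_node=$(( nr_hugepages / no_nodes ))    # 1024 / 2 = 512 pages per node for an even split
echo "nr_hugepages=${nr_hugepages} (${per_node} per node)"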
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.205 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43324884 kB' 'MemAvailable: 46837788 kB' 'Buffers: 3736 kB' 'Cached: 12773080 kB' 'SwapCached: 0 kB' 'Active: 9730404 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335356 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466236 kB' 'Mapped: 200020 kB' 'Shmem: 8872280 kB' 'KReclaimable: 203984 kB' 'Slab: 590300 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386316 kB' 'KernelStack: 12736 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10487028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
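The snapshot that get_meminfo just printed is the whole of /proc/meminfo; its hugepage state is HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 with Hugepagesize: 2048 kB, so Hugetlb: 2097152 kB is simply 1024 x 2048 kB. A short sketch of the same consistency check read directly from /proc/meminfo follows; the awk extraction and the OK/MISMATCH wording are illustrative, not part of the SPDK scripts.

#!/usr/bin/env bash
# Sketch: cross-check the hugepage fields of /proc/meminfo shown in the snapshot above.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)

echo "total=${total} free=${free} rsvd=${rsvd} surp=${surp} size=${size_kb}kB hugetlb=${hugetlb_kb}kB"
# On the host traced above: 1024 pages x 2048 kB = 2097152 kB, all free, none reserved or surplus.
(( total * size_kb == hugetlb_kb && surp == 0 && rsvd == 0 && free == total )) && echo OK || echo MISMATCH >&2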
[xtrace trimmed: the setup/common.sh@31-32 read/compare loop walks every /proc/meminfo field from MemTotal through HardwareCorrupted; none match AnonHugePages]
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
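verify_nr_hugepages has now taken anon=0 from get_meminfo AnonHugePages and repeats the same lookup below for HugePages_Surp and HugePages_Rsvd. Each call reduces to: snapshot the relevant meminfo file, then read it as 'field: value' pairs until the requested field matches and echo its value. The following is a self-contained sketch of that pattern; get_meminfo_sketch is an illustrative name, not the actual setup/common.sh helper, and the per-node handling is an assumption based on the sysfs path seen in the trace.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _
    # A per-node query reads that node's own meminfo instead (assumed sysfs layout).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    # Node meminfo lines carry a "Node N " prefix; strip it so field names line up with /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Echo the value of the requested field and stop; every other field is skipped.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch AnonHugePages      # prints 0 on the system traced above
get_meminfo_sketch HugePages_Total    # prints 1024 in the snapshots above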
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.206 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43334212 kB' 'MemAvailable: 46847116 kB' 'Buffers: 3736 kB' 'Cached: 12773084 kB' 'SwapCached: 0 kB' 'Active: 9730376 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335328 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466244 kB' 'Mapped: 199952 kB' 'Shmem: 8872284 kB' 'KReclaimable: 203984 kB' 'Slab: 590256 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386272 kB' 'KernelStack: 12736 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10487044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
[xtrace trimmed: the setup/common.sh@31-32 read/compare loop walks every /proc/meminfo field from MemTotal through HugePages_Rsvd; none match HugePages_Surp]
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.208 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43334032 kB' 'MemAvailable: 46846936 kB' 'Buffers: 3736 kB' 'Cached: 12773100 kB' 'SwapCached: 0 kB' 'Active: 9730664 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335616 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466456 kB' 'Mapped: 199952 kB' 'Shmem: 8872300 kB' 'KReclaimable: 203984 kB' 'Slab: 590364 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386380 kB' 'KernelStack: 12752 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10487068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
[xtrace trimmed: the setup/common.sh@31-32 read/compare loop walks /proc/meminfo fields MemTotal through Mapped without matching HugePages_Rsvd]
00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 
03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.210 nr_hugepages=1024 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.210 resv_hugepages=0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.210 surplus_hugepages=0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.210 anon_hugepages=0 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43334032 
kB' 'MemAvailable: 46846936 kB' 'Buffers: 3736 kB' 'Cached: 12773120 kB' 'SwapCached: 0 kB' 'Active: 9730660 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335612 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466456 kB' 'Mapped: 199952 kB' 'Shmem: 8872320 kB' 'KReclaimable: 203984 kB' 'Slab: 590364 kB' 'SReclaimable: 203984 kB' 'SUnreclaim: 386380 kB' 'KernelStack: 12752 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10487088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
00:04:58.211-00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (field-scan loop: every field from MemTotal through Unaccepted is read with IFS=': ', compared against HugePages_Total, and skipped via continue) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
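Every get_meminfo call traced in this section follows the same pattern: pick /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node is given, strip the "Node <n> " prefix that the per-node files carry, then split each line on ': ' and return the value of the requested field. Below is a minimal stand-alone sketch of that pattern; get_meminfo_sketch is an illustrative name, not the actual setup/common.sh helper, and the expected outputs in the comments are simply the values visible in the snapshots above.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node <n> "

    get_meminfo_sketch() {   # hypothetical helper, illustrative only
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }        # per-node files prefix each line with "Node <n> "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done <"$file"
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd     -> 0 on the system traced above
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 for NUMA node 0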
00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27789264 kB' 'MemUsed: 5040620 kB' 'SwapCached: 0 kB' 'Active: 2658232 kB' 'Inactive: 155448 kB' 'Active(anon): 2496940 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2591776 kB' 'Mapped: 67628 kB' 'AnonPages: 225044 kB' 'Shmem: 2275036 kB' 'KernelStack: 6696 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 332936 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 232884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:58.212-00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (field-scan loop: every node0 meminfo field from MemTotal through Unaccepted is read with IFS=': ', compared against HugePages_Surp, and skipped via continue) 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15544768 kB' 'MemUsed: 12167056 kB' 'SwapCached: 0 kB' 'Active: 7072752 kB' 'Inactive: 3354040 kB' 'Active(anon): 6838996 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3354040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10185100 kB' 'Mapped: 132324 kB' 'AnonPages: 241704 kB' 'Shmem: 6597304 kB' 'KernelStack: 6056 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103932 kB' 'Slab: 257428 kB' 'SReclaimable: 103932 kB' 'SUnreclaim: 153496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.214 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
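The repeated "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above and below are xtrace output from the get_meminfo helper in setup/common.sh: it reads one meminfo file into an array with mapfile, strips any "Node <n> " prefix (the mem=("${mem[@]#Node +([0-9]) }") entry above), then walks the "field: value" pairs until it reaches the requested field — here HugePages_Surp for node 1, whose printf output above reports HugePages_Total/HugePages_Free at 512 and HugePages_Surp at 0. A condensed sketch of that scan follows; get_meminfo_sketch is a hypothetical stand-in written for illustration from what the trace shows, not the SPDK implementation itself.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    # Per-node queries read that node's own meminfo when it exists,
    # e.g. /sys/devices/system/node/node1/meminfo in the trace above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # Per-node meminfo lines carry a "Node <n> " prefix; drop it so the
        # field name compares cleanly (the real helper strips it with extglob).
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val rest <<< "$line"
        # Skip every field until the requested one, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0
}

With the node-1 values printed above, get_meminfo_sketch HugePages_Surp 1 would print 0, which is what the "echo 0" / "return 0" entries further down record before the surplus is added into nodes_test.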
00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:58.215 node0=512 expecting 512 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:58.215 node1=512 expecting 512 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:58.215 00:04:58.215 real 0m1.362s 00:04:58.215 user 0m0.612s 00:04:58.215 sys 0m0.709s 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.215 03:04:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:58.215 ************************************ 00:04:58.215 END TEST even_2G_alloc 00:04:58.215 ************************************ 00:04:58.473 03:04:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:58.473 03:04:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.473 03:04:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.473 03:04:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.473 ************************************ 00:04:58.473 START TEST odd_alloc 00:04:58.473 ************************************ 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:58.473 03:04:24 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:58.473 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.474 03:04:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.408 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.408 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.408 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.408 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.408 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.408 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.408 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.408 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.408 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.408 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.408 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.408 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.408 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.408 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:59.408 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.408 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.408 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43320204 kB' 'MemAvailable: 46833100 kB' 'Buffers: 3736 kB' 'Cached: 12773212 kB' 'SwapCached: 0 kB' 'Active: 9727736 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332688 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463520 kB' 'Mapped: 199036 kB' 'Shmem: 8872412 kB' 'KReclaimable: 203968 kB' 'Slab: 590432 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386464 kB' 'KernelStack: 12720 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10473352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2410076 kB' 
'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.672 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43320204 kB' 'MemAvailable: 46833100 kB' 'Buffers: 3736 kB' 'Cached: 12773212 kB' 'SwapCached: 0 kB' 'Active: 9727600 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332552 kB' 'Inactive(anon): 0 
kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463364 kB' 'Mapped: 198964 kB' 'Shmem: 8872412 kB' 'KReclaimable: 203968 kB' 'Slab: 590416 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386448 kB' 'KernelStack: 12672 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10473368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 
03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 
03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43319952 kB' 'MemAvailable: 46832848 kB' 'Buffers: 3736 kB' 'Cached: 12773232 kB' 'SwapCached: 0 kB' 'Active: 9727720 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332672 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463480 kB' 'Mapped: 198884 kB' 'Shmem: 8872432 kB' 'KReclaimable: 203968 kB' 'Slab: 590404 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386436 kB' 'KernelStack: 12704 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10473388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 
03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:59.677 nr_hugepages=1025 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.677 resv_hugepages=0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.677 surplus_hugepages=0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.677 anon_hugepages=0 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.677 03:04:26 
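At this point the trace has finished both preliminary lookups: get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd each echoed 0, so hugepages.sh records surp=0 and resv=0 alongside the requested nr_hugepages=1025, and the expanded assertion at setup/hugepages.sh@107 ties the three together before the same meminfo walk is repeated once more for HugePages_Total (continuing below). The check, reproduced as a standalone bash sketch using the values echoed in the log (illustrative only; the surrounding script context is not shown here):

    nr_hugepages=1025 surp=0 resv=0        # values echoed by the trace above
    # the configured pool must equal the requested pages plus surplus and reserved pages
    if (( 1025 == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    fi
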
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43319012 kB' 'MemAvailable: 46831908 kB' 'Buffers: 3736 kB' 'Cached: 12773252 kB' 'SwapCached: 0 kB' 'Active: 9731032 kB' 'Inactive: 3509488 kB' 'Active(anon): 9335984 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466812 kB' 'Mapped: 199320 kB' 'Shmem: 8872452 kB' 'KReclaimable: 203968 kB' 'Slab: 590380 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386412 kB' 'KernelStack: 12736 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10478144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196952 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.678 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- 
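The three near-identical scans above (HugePages_Surp, HugePages_Rsvd, HugePages_Total) are the same get_meminfo helper from setup/common.sh walking /proc/meminfo one key at a time and echoing only the requested field: 0, 0 and 1025 respectively. A rough reconstruction of that helper from the xtrace, written as standalone bash (the real setup/common.sh may differ in detail, e.g. in how the read loop is fed):

    #!/usr/bin/env bash
    shopt -s extglob                                   # needed for the +([0-9]) pattern below
    get_meminfo() {                                    # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # a per-node query reads the node-local meminfo instead of the global file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")               # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue           # skip every key except the requested one
            echo "$val"                                # e.g. 1025 for HugePages_Total
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total                        # printed 1025 on the machine traced here

The trace resumes below with setup/hugepages.sh@110 re-checking that freshly read total against nr_hugepages + surp + resv.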
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27767468 kB' 'MemUsed: 5062416 kB' 'SwapCached: 0 kB' 'Active: 2661936 kB' 'Inactive: 155448 kB' 'Active(anon): 2500644 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2591840 kB' 'Mapped: 66712 kB' 'AnonPages: 228676 kB' 'Shmem: 2275100 kB' 'KernelStack: 6616 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 332960 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 232908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.679 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.680 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15544868 kB' 'MemUsed: 12166956 kB' 'SwapCached: 0 kB' 'Active: 7071756 kB' 'Inactive: 3354040 kB' 'Active(anon): 6838000 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3354040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10185168 kB' 'Mapped: 132560 kB' 'AnonPages: 240804 kB' 'Shmem: 6597372 kB' 'KernelStack: 6120 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103916 kB' 'Slab: 257420 kB' 'SReclaimable: 103916 kB' 'SUnreclaim: 153504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.681 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:59.682 node0=512 expecting 513 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:59.682 node1=513 expecting 512 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:59.682 00:04:59.682 real 0m1.367s 00:04:59.682 user 0m0.535s 00:04:59.682 sys 0m0.784s 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.682 03:04:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.682 ************************************ 00:04:59.682 END TEST odd_alloc 00:04:59.682 ************************************ 00:04:59.682 03:04:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:59.682 03:04:26 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.682 03:04:26 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.682 03:04:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.682 ************************************ 00:04:59.682 START TEST custom_alloc 00:04:59.682 ************************************ 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:59.682 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:59.683 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:59.941 03:04:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.942 03:04:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.877 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:00.877 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:00.877 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:00.877 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:00.877 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:00.877 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:00.877 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:00.877 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:00.877 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:00.877 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:00.877 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:00.877 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:00.877 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:00.877 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:00.877 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:00.877 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:00.877 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42266712 kB' 'MemAvailable: 45779608 kB' 'Buffers: 3736 kB' 'Cached: 12773340 kB' 'SwapCached: 0 kB' 'Active: 9727680 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332632 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463352 kB' 'Mapped: 199000 kB' 'Shmem: 8872540 kB' 'KReclaimable: 203968 kB' 'Slab: 590056 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386088 kB' 'KernelStack: 12688 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10473372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.142 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[each remaining /proc/meminfo key (KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) is compared against AnonHugePages and skipped with 'continue']
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.143 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42265948 kB' 'MemAvailable: 45778844 kB' 'Buffers: 3736 kB' 'Cached: 12773340 kB' 'SwapCached: 0 kB' 'Active: 9727776 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332728 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463456 kB' 'Mapped: 198896 kB' 'Shmem: 8872540 kB' 'KReclaimable: 203968 kB' 'Slab: 590056 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386088 kB' 'KernelStack: 12688 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10473392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
[each key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with 'continue']
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.145 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42266808 kB' 'MemAvailable: 45779704 kB' 'Buffers: 3736 kB' 'Cached: 12773364 kB' 'SwapCached: 0 kB' 'Active: 9727884 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332836 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463592 kB' 'Mapped: 198896 kB' 'Shmem: 8872564 kB' 'KReclaimable: 203968 kB' 'Slab: 590032 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386064 kB' 'KernelStack: 12688 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10473784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB'
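The xtrace entries above come from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file when a node number is passed), strips any leading "Node <n>" prefix, then walks the lines with IFS=': ' until the requested key matches and echoes its value. A minimal sketch of that parsing pattern, written for illustration under the assumption that a simple while-read loop is equivalent to the mapfile-based loop in the trace (this is not the verbatim SPDK source):

    # sketch: look up one field of /proc/meminfo the way the trace above does
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # per-node lookup, e.g. /sys/devices/system/node/node0/meminfo, when a node is given
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys (the long 'continue' runs in the trace)
            echo "$val"                        # value only, e.g. 0 for HugePages_Surp
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # usage as in setup/hugepages.sh@99 above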
[each key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with 'continue']
00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@100 -- # resv=0 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:01.147 nr_hugepages=1536 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.147 resv_hugepages=0 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.147 surplus_hugepages=0 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.147 anon_hugepages=0 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.147 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.148 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42267008 kB' 'MemAvailable: 45779904 kB' 'Buffers: 3736 kB' 'Cached: 12773388 kB' 'SwapCached: 0 kB' 'Active: 9728008 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463668 kB' 'Mapped: 198896 kB' 'Shmem: 8872588 kB' 'KReclaimable: 203968 kB' 'Slab: 590032 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386064 kB' 'KernelStack: 12720 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10473804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:01.148 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
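At this point setup/hugepages.sh has collected anon=0, surp=0 and resv=0, printed the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary above, and run the two arithmetic tests confirming that the 1536 pages requested by the custom-allocation test are fully accounted for by the kernel's hugepage pool; it then re-reads HugePages_Total (setup/hugepages.sh@110), which is the scan continuing below. A condensed sketch of that accounting check, with variable names taken from the trace (the wrapper function name check_custom_alloc is hypothetical):

    # sketch: verify a custom hugepage allocation the way hugepages.sh@97-110 does above
    check_custom_alloc() {
        local expected=1536                           # pages requested by this test run
        local anon surp resv nr_hugepages
        anon=$(get_meminfo AnonHugePages)             # transparent hugepages, must stay out of the count
        surp=$(get_meminfo HugePages_Surp)            # surplus pages beyond the configured pool
        resv=$(get_meminfo HugePages_Rsvd)            # reserved but not yet faulted-in pages
        nr_hugepages=$(get_meminfo HugePages_Total)   # the pool the kernel actually set up
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        (( expected == nr_hugepages + surp + resv )) || return 1  # pool + surplus + reserved must equal the request
        (( expected == nr_hugepages ))                            # with surp=resv=0 the pool itself equals the request
    }

With HugePages_Total: 1536 and HugePages_Surp/HugePages_Rsvd both 0 in the meminfo dumps above, both tests pass in this run.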
00:05:01.148 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[each key from MemFree through ShmemPmdMapped is compared against HugePages_Total and skipped with 'continue']
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.149 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27773736 kB' 'MemUsed: 5056148 kB' 'SwapCached: 0 kB' 'Active: 2657336 kB' 'Inactive: 155448 kB' 'Active(anon): 2496044 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2591924 kB' 'Mapped: 66560 kB' 'AnonPages: 224092 kB' 'Shmem: 2275184 kB' 'KernelStack: 6648 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 332840 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 232788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace continues: each field of the node0 dump, MemTotal through HugePages_Free, is compared against HugePages_Surp and skipped with 'continue']
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
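That was one complete get_meminfo call: the helper switches to /sys/devices/system/node/node0/meminfo because a node was given, strips the "Node 0" prefix from every line, then walks the field/value pairs until it reaches HugePages_Surp and echoes 0. A condensed sketch of that parsing logic, reconstructed from the trace purely for illustration (the real helper lives in setup/common.sh and differs in detail, for example it strips the prefix with an extglob expansion rather than sed):

#!/usr/bin/env bash
# Sketch of the get_meminfo behaviour visible in the trace above; simplified.
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo

    # Prefer the per-node meminfo file when a node number was supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node <N> "; drop that prefix,
    # split each line on ": ", and stop at the requested field.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: the surplus huge page count on node 0, which prints 0 in the run above.
get_meminfo_sketch HugePages_Surp 0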
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.151 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14494580 kB' 'MemUsed: 13217244 kB' 'SwapCached: 0 kB' 'Active: 7070724 kB' 'Inactive: 3354040 kB' 'Active(anon): 6836968 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3354040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10185220 kB' 'Mapped: 132336 kB' 'AnonPages: 239576 kB' 'Shmem: 6597424 kB' 'KernelStack: 6072 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103916 kB' 'Slab: 257192 kB' 'SReclaimable: 103916 kB' 'SUnreclaim: 153276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace continues: each field of the node1 dump, MemTotal through HugePages_Free, is compared against HugePages_Surp and skipped with 'continue']
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:01.152 node0=512 expecting 512
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:01.152 node1=1024 expecting 1024 00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:01.152 00:05:01.152 real 0m1.403s 00:05:01.152 user 0m0.568s 00:05:01.152 sys 0m0.795s 00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.152 03:04:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:01.152 ************************************ 00:05:01.152 END TEST custom_alloc 00:05:01.152 ************************************ 00:05:01.152 03:04:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:01.152 03:04:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.152 03:04:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.152 03:04:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.152 ************************************ 00:05:01.152 START TEST no_shrink_alloc 00:05:01.152 ************************************ 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:01.152 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
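With custom_alloc finished, get_test_nr_hugepages has just converted the requested 2097152 kB into a page count and pinned it to the node list it was given: with the 2048 kB default huge page size that is the nr_hugepages=1024 assigned to node 0 in the trace. The arithmetic, spelled out as a small illustrative sketch (names, and the exact division, are assumptions drawn from the values in the trace rather than from setup/hugepages.sh itself):

#!/usr/bin/env bash
# Sketch of the size-to-page-count conversion implied by the trace above:
# 2097152 kB requested / 2048 kB per huge page = 1024 pages, all on node 0.
size_kb=2097152
node_ids=(0)

default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
nr_hugepages=$(( size_kb / default_hugepage_kb ))

declare -A nodes_req
for node in "${node_ids[@]}"; do
    # With a single node listed, that node receives the full count, as in this run.
    nodes_req[$node]=$nr_hugepages
done

echo "nr_hugepages=${nr_hugepages} requested on node(s): ${node_ids[*]}"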
00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.153 03:04:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.532 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.532 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:02.532 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.532 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.532 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.532 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.532 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.532 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.532 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.532 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.532 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.532 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.532 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.532 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.532 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.532 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.532 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.532 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.533 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43321820 kB' 'MemAvailable: 46834716 kB' 'Buffers: 3736 kB' 'Cached: 12773476 kB' 'SwapCached: 0 kB' 'Active: 9728484 kB' 'Inactive: 3509488 kB' 'Active(anon): 9333436 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463964 kB' 'Mapped: 199028 kB' 'Shmem: 8872676 kB' 'KReclaimable: 203968 kB' 'Slab: 590032 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386064 kB' 'KernelStack: 12688 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
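[editor's note] The trace above and below is the setup/common.sh get_meminfo loop walking /proc/meminfo one "key: value" pair at a time and skipping every key that is not the one requested (AnonHugePages at this point). A minimal standalone sketch of that lookup pattern follows; the function name get_meminfo_sketch, its arguments, and its output handling are illustrative assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Minimal sketch, assuming a simplified re-implementation of the lookup
    # traced in this log; not the exact setup/common.sh helper.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    get_meminfo_sketch() {
        local get=$1       # meminfo key to look up, e.g. AnonHugePages
        local node=${2:-}  # optional NUMA node; empty means system-wide
        local mem_f=/proc/meminfo
        local var val _
        # Per-node lookups read the node-specific file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <n> "; strip that prefix so
        # the "key: value" parsing below works for both file layouts.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Skip every key that is not the requested one; the long runs of
            # "continue" in the trace correspond to these mismatches.
            [[ $var == "$get" ]] || continue
            echo "$val"    # value in kB, or a bare count for HugePages_* keys
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example: get_meminfo_sketch HugePages_Free   -> 1024 on the system traced here

The per-node branch is also why the trace shows the existence check on /sys/devices/system/node/node/meminfo with an empty node value: with node unset, the helper falls back to the system-wide /proc/meminfo.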
00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.533 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43322600 kB' 'MemAvailable: 46835496 kB' 'Buffers: 3736 kB' 'Cached: 12773480 kB' 'SwapCached: 0 kB' 'Active: 9728304 kB' 'Inactive: 3509488 kB' 'Active(anon): 9333256 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463776 kB' 'Mapped: 198912 kB' 'Shmem: 8872680 kB' 'KReclaimable: 203968 kB' 'Slab: 590016 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386048 kB' 'KernelStack: 12720 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 
'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.534 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 
03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 
03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.535 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43322348 kB' 'MemAvailable: 46835244 kB' 'Buffers: 3736 kB' 'Cached: 12773496 kB' 'SwapCached: 0 kB' 'Active: 9728324 kB' 'Inactive: 3509488 kB' 'Active(anon): 9333276 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463776 kB' 'Mapped: 198912 kB' 'Shmem: 8872696 kB' 'KReclaimable: 203968 kB' 'Slab: 590016 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386048 kB' 'KernelStack: 12720 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.536 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.537 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:05:02.538 nr_hugepages=1024 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.538 resv_hugepages=0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.538 surplus_hugepages=0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.538 anon_hugepages=0 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.538 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43322348 kB' 'MemAvailable: 46835244 kB' 'Buffers: 3736 kB' 'Cached: 12773520 kB' 'SwapCached: 0 kB' 'Active: 9728308 kB' 'Inactive: 3509488 kB' 'Active(anon): 9333260 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463764 kB' 'Mapped: 198912 kB' 'Shmem: 8872720 kB' 'KReclaimable: 203968 kB' 'Slab: 590016 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386048 kB' 'KernelStack: 12704 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.539 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.540 
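The get_nodes step above builds one hugepage count per NUMA node (node0=1024, node1=0 on this host). A sketch of that enumeration, assuming the counts come from each node's 2048kB nr_hugepages counter; the traced script derives its numbers through its own get_meminfo helper:

#!/usr/bin/env bash
# Sketch only: collect per-node 2 MiB hugepage counts into an associative
# array keyed by node number, mirroring the nodes_sys[] assignments above.
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}                                   # "node0" -> "0"
    f=$node/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $f ]] || continue                            # skip nodes without 2 MiB pages
    nodes_sys[$n]=$(<"$f")
done
echo "no_nodes=${#nodes_sys[@]}"                       # e.g. 2 on the traced host
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"                     # e.g. node0=1024, node1=0
done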
03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26743916 kB' 'MemUsed: 6085968 kB' 'SwapCached: 0 kB' 'Active: 2657656 kB' 'Inactive: 155448 kB' 'Active(anon): 2496364 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592048 kB' 'Mapped: 66560 kB' 'AnonPages: 224232 kB' 'Shmem: 2275308 kB' 'KernelStack: 6664 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 332800 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 232748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.540 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.541 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.542 03:04:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.542 node0=1024 expecting 1024 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.542 03:04:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.923 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.923 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.923 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.923 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.923 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.923 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.923 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.923 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.923 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.923 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.923 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.923 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.923 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.923 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.923 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.923 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.923 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.923 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.923 03:04:30 
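Re-running setup.sh above with NRHUGE=512 and CLEAR_HUGE=no does not shrink the pool: it reports that 1024 pages are already allocated on node0 and leaves them in place. A sketch of that guard, with illustrative variable names rather than the exact ones in scripts/setup.sh:

#!/usr/bin/env bash
# Sketch only: keep an existing hugepage allocation when it already covers
# the request (the "no shrink" behaviour exercised by this test).
NRHUGE=${NRHUGE:-512}
node=0
nr_file=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(<"$nr_file")

if (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node$node"
else
    # Growing the pool is still allowed; only shrinking is avoided.
    # Writing this counter requires root.
    echo "$NRHUGE" > "$nr_file"
fi

The verify_nr_hugepages pass that follows then re-reads the counters to confirm node0 still reports 1024 pages.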
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43323024 kB' 'MemAvailable: 46835920 kB' 'Buffers: 3736 kB' 'Cached: 12773584 kB' 'SwapCached: 0 kB' 'Active: 9728596 kB' 'Inactive: 3509488 kB' 'Active(anon): 9333548 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463496 kB' 'Mapped: 199096 kB' 'Shmem: 8872784 kB' 'KReclaimable: 203968 kB' 'Slab: 590192 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386224 kB' 'KernelStack: 12768 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.923 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.924 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 
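The xtrace above is the tail of one get_meminfo pass over /proc/meminfo: every key is compared against AnonHugePages and skipped with continue until the matching line is reached, its value is echoed, and hugepages.sh stores the result as anon=0. A minimal sketch of that parsing pattern, assuming a hypothetical helper name get_meminfo_sketch rather than the exact setup/common.sh source:

shopt -s extglob

get_meminfo_sketch() {
    local get=$1 mem_f=/proc/meminfo var val _
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}

anon=$(get_meminfo_sketch AnonHugePages)   # 0 on this node, matching anon=0 above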
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43323024 kB' 'MemAvailable: 46835920 kB' 'Buffers: 3736 kB' 'Cached: 12773588 kB' 'SwapCached: 0 kB' 'Active: 9728008 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463376 kB' 'Mapped: 198992 kB' 'Shmem: 8872788 kB' 'KReclaimable: 203968 kB' 'Slab: 590192 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386224 kB' 'KernelStack: 12736 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 
03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.925 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 
03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.926 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43322292 kB' 'MemAvailable: 46835188 kB' 'Buffers: 3736 kB' 'Cached: 12773604 kB' 'SwapCached: 0 kB' 'Active: 9728096 kB' 
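The same trace also shows how the input file is selected before each pass: with $node empty, the -e test on /sys/devices/system/node/node/meminfo cannot match and mem_f stays /proc/meminfo. A small sketch of that selection, under the same hypothetical-helper assumption:

pick_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    # Prefer the per-node file when a node number is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

pick_meminfo_file      # prints /proc/meminfo, as in this run
pick_meminfo_file 0    # prints /sys/devices/system/node/node0/meminfo when that file exists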
'Inactive: 3509488 kB' 'Active(anon): 9333048 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463428 kB' 'Mapped: 198916 kB' 'Shmem: 8872804 kB' 'KReclaimable: 203968 kB' 'Slab: 590192 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386224 kB' 'KernelStack: 12752 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.927 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.928 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.929 nr_hugepages=1024 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.929 resv_hugepages=0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.929 surplus_hugepages=0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.929 anon_hugepages=0 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43322292 kB' 'MemAvailable: 46835188 kB' 'Buffers: 3736 kB' 
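Once the three lookups return (anon=0, surp=0, resv=0), hugepages.sh echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary seen above and checks at hugepages.sh@107 and @109 that 1024 equals nr_hugepages + surp + resv and nr_hugepages alone, before a final get_meminfo HugePages_Total read. A condensed sketch of that accounting check, reusing the hypothetical get_meminfo_sketch helper and assuming the 1024 figure is the kernel-reported HugePages_Total:

nr_hugepages=1024                               # value echoed above
anon=$(get_meminfo_sketch AnonHugePages)        # 0 kB in this run
surp=$(get_meminfo_sketch HugePages_Surp)       # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0
total=$(get_meminfo_sketch HugePages_Total)     # 1024

(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages" >&2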
'Cached: 12773628 kB' 'SwapCached: 0 kB' 'Active: 9727868 kB' 'Inactive: 3509488 kB' 'Active(anon): 9332820 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3509488 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463160 kB' 'Mapped: 198916 kB' 'Shmem: 8872828 kB' 'KReclaimable: 203968 kB' 'Slab: 590192 kB' 'SReclaimable: 203968 kB' 'SUnreclaim: 386224 kB' 'KernelStack: 12736 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10474372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 38592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2410076 kB' 'DirectMap2M: 20578304 kB' 'DirectMap1G: 46137344 kB' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.929 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.930 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26744156 kB' 'MemUsed: 6085728 kB' 'SwapCached: 0 kB' 'Active: 2657480 kB' 'Inactive: 155448 kB' 'Active(anon): 2496188 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 155448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592152 kB' 
'Mapped: 66560 kB' 'AnonPages: 223972 kB' 'Shmem: 2275412 kB' 'KernelStack: 6696 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100052 kB' 'Slab: 332972 kB' 'SReclaimable: 100052 kB' 'SUnreclaim: 232920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.931 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.932 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:03.933 node0=1024 expecting 1024 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.933 00:05:03.933 real 0m2.771s 00:05:03.933 user 0m1.092s 00:05:03.933 sys 0m1.594s 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.933 03:04:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 ************************************ 00:05:03.933 END TEST no_shrink_alloc 00:05:03.933 ************************************ 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.933 03:04:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.933 00:05:03.933 real 0m11.209s 00:05:03.933 user 0m4.252s 00:05:03.933 sys 0m5.838s 00:05:03.933 03:04:30 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.933 03:04:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 ************************************ 00:05:03.933 END TEST hugepages 00:05:03.933 ************************************ 00:05:04.192 03:04:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:04.192 03:04:30 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.192 03:04:30 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.192 03:04:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.192 ************************************ 00:05:04.192 START TEST driver 00:05:04.192 ************************************ 00:05:04.192 03:04:30 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:04.192 * Looking for test storage... 
00:05:04.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:04.192 03:04:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:04.192 03:04:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.192 03:04:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.729 03:04:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:06.729 03:04:33 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.729 03:04:33 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.729 03:04:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.729 ************************************ 00:05:06.729 START TEST guess_driver 00:05:06.729 ************************************ 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:06.729 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:06.730 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:06.730 Looking for driver=vfio-pci 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.730 03:04:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.107 03:04:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.045 03:04:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.579 00:05:11.579 real 0m4.768s 00:05:11.579 user 0m1.095s 00:05:11.579 sys 0m1.775s 00:05:11.579 03:04:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.579 03:04:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:11.579 ************************************ 00:05:11.579 END TEST guess_driver 00:05:11.579 ************************************ 00:05:11.579 00:05:11.579 real 0m7.361s 00:05:11.579 user 0m1.692s 00:05:11.579 sys 0m2.792s 00:05:11.579 03:04:37 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.579 
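The guess_driver result above (driver=vfio-pci) comes down to three probes: whether /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists, whether /sys/kernel/iommu_groups is populated (141 groups on this node), and whether modprobe --show-depends vfio_pci resolves to real .ko modules. A rough standalone equivalent of that decision, offered as a sketch rather than the setup/driver.sh implementation:

    #!/usr/bin/env bash
    # Prefer vfio-pci when the IOMMU is usable (or unsafe no-IOMMU mode is enabled).
    shopt -s nullglob
    pick_driver() {
        local unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local groups=(/sys/kernel/iommu_groups/*)
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
            modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
    echo "Looking for driver=$(pick_driver)"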
03:04:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:11.579 ************************************ 00:05:11.579 END TEST driver 00:05:11.579 ************************************ 00:05:11.579 03:04:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:11.579 03:04:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.579 03:04:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.579 03:04:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:11.579 ************************************ 00:05:11.579 START TEST devices 00:05:11.579 ************************************ 00:05:11.579 03:04:37 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:11.579 * Looking for test storage... 00:05:11.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:11.579 03:04:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:11.579 03:04:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:11.579 03:04:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.579 03:04:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:12.955 03:04:39 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:12.955 No valid GPT data, 
bailing 00:05:12.955 03:04:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:12.955 03:04:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:12.955 03:04:39 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.955 03:04:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.955 ************************************ 00:05:12.955 START TEST nvme_mount 00:05:12.955 ************************************ 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:12.955 03:04:39 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:12.955 03:04:39 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:13.892 Creating new GPT entries in memory. 00:05:13.892 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:13.892 other utilities. 00:05:13.892 03:04:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:13.892 03:04:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.892 03:04:40 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.892 03:04:40 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.892 03:04:40 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:15.269 Creating new GPT entries in memory. 00:05:15.269 The operation has completed successfully. 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 296251 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
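Condensed from the xtrace above, the nvme_mount partition-and-mount step boils down to roughly the commands below. This is only a sketch of what test/setup/common.sh and devices.sh do in this run (the device, PCI address and paths are the ones from this log); the uevent synchronisation and size bookkeeping are omitted, and the dummy test file is shown here with a plain touch.

# rough sketch of the traced partition/format/mount sequence (not the script itself)
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                              # drop any existing GPT/MBR metadata
flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition (2097152 x 512-byte sectors)
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                             # quiet, forced ext4 format
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                                # dummy file the verify step looks for

The verify step that follows re-runs setup.sh config with PCI_ALLOWED=0000:88:00.0 and expects the script to report the device as active (mounted) rather than rebinding it to vfio-pci.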
00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.269 03:04:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.203 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.204 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:16.204 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.204 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.204 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.204 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.462 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.463 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.463 03:04:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.721 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:16.721 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:16.721 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.721 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:16.721 03:04:43 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.721 03:04:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.099 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.100 03:04:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.036 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.296 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.296 00:05:19.296 real 0m6.344s 00:05:19.296 user 0m1.472s 00:05:19.296 sys 0m2.439s 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.296 03:04:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:19.296 ************************************ 00:05:19.296 END TEST nvme_mount 00:05:19.296 ************************************ 
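The cleanup_nvme helper traced just above tears the fixture down in the opposite order, and the same helper runs again in the final cleanup further below. Assuming the disk and mount-point names used throughout this run, a minimal equivalent is:

# teardown sketch matching the cleanup_nvme trace (unmount before wiping signatures)
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # erase any filesystem signature on the partition
[[ -b $disk ]] && wipefs --all "$disk"           # then the signatures on the whole disk

Depending on whether the last mkfs ran on a partition or on the whole disk, the wipefs calls report either the ext4 magic (53 ef) or the GPT "EFI PART" headers plus the 55 aa protective-MBR bytes; both variants appear in this log.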
00:05:19.296 03:04:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:19.296 03:04:45 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.296 03:04:45 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.296 03:04:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.296 ************************************ 00:05:19.296 START TEST dm_mount 00:05:19.296 ************************************ 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.296 03:04:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:20.672 Creating new GPT entries in memory. 00:05:20.672 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.672 other utilities. 00:05:20.672 03:04:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.672 03:04:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.672 03:04:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.672 03:04:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.672 03:04:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:21.609 Creating new GPT entries in memory. 00:05:21.609 The operation has completed successfully. 
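dm_mount runs the same zap/partition loop for two 1 GiB partitions (the second sgdisk --new call follows immediately below) and then builds a device-mapper target named nvme_dm_test on top of them. The exact table the script hands to dmsetup is not visible in this excerpt; a plausible equivalent that concatenates the two partitions into one linear device, using only the names from this run, would be:

# hypothetical linear dm table over the two test partitions (the real table is not shown in the trace)
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
sz1=$(blockdev --getsz "$p1")          # sizes in 512-byte sectors
sz2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $sz1 linear $p1 0
$sz1 $sz2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 in this run

Whatever the concrete table is, the rest of the test only relies on nvme0n1p1 and nvme0n1p2 showing up as holders of dm-0, which is what the later checks under /sys/class/block/*/holders verify.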
00:05:21.609 03:04:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.609 03:04:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.609 03:04:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:21.609 03:04:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:21.609 03:04:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:22.547 The operation has completed successfully. 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 298641 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.547 03:04:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.481 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.482 03:04:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:23.740 03:04:50 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.740 03:04:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.698 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:24.964 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:24.964 00:05:24.964 real 0m5.554s 00:05:24.964 user 0m0.882s 00:05:24.964 sys 0m1.530s 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.964 03:04:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:24.964 ************************************ 00:05:24.964 END TEST dm_mount 00:05:24.964 ************************************ 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.964 03:04:51 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.964 03:04:51 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.223 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:25.223 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:25.223 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.223 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.223 03:04:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:25.223 00:05:25.223 real 0m13.748s 00:05:25.223 user 0m2.947s 00:05:25.223 sys 0m4.983s 00:05:25.223 03:04:51 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.223 03:04:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 ************************************ 00:05:25.223 END TEST devices 00:05:25.223 ************************************ 00:05:25.223 00:05:25.223 real 0m42.744s 00:05:25.223 user 0m12.104s 00:05:25.223 sys 0m18.818s 00:05:25.223 03:04:51 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.223 03:04:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 ************************************ 00:05:25.223 END TEST setup.sh 00:05:25.223 ************************************ 00:05:25.223 03:04:51 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:26.158 Hugepages 00:05:26.158 node hugesize free / total 00:05:26.418 node0 1048576kB 0 / 0 00:05:26.418 node0 2048kB 2048 / 2048 00:05:26.418 node1 1048576kB 0 / 0 00:05:26.418 node1 2048kB 0 / 0 00:05:26.418 00:05:26.418 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.418 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:26.418 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:26.418 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:26.418 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:26.418 03:04:52 -- spdk/autotest.sh@130 -- # uname -s 00:05:26.418 03:04:52 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:26.418 03:04:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:26.418 03:04:52 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.796 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.796 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.796 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.735 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.735 03:04:55 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:29.673 03:04:56 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:29.673 03:04:56 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:29.673 03:04:56 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:29.673 03:04:56 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:29.673 03:04:56 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:29.673 03:04:56 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:29.673 03:04:56 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.673 03:04:56 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.673 03:04:56 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:29.931 03:04:56 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:29.931 03:04:56 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:29.931 03:04:56 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.865 Waiting for block devices as requested 00:05:30.865 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:30.865 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:31.123 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:31.123 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:31.123 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:31.382 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:31.382 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:31.382 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:31.382 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:31.640 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:31.640 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:31.640 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:31.640 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:31.898 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:31.898 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:31.898 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:31.898 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:32.157 03:04:58 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
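The pre-cleanup step traced here first collects the NVMe BDFs by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr', then resolves each BDF to its character device through sysfs. Stripped of the autotest plumbing, that lookup is essentially the following, shown for the single 0000:88:00.0 controller of this run:

# sketch of the BDF -> /dev/nvmeX resolution performed by get_nvme_ctrlr_from_bdf
bdf=0000:88:00.0
for link in /sys/class/nvme/nvme*; do
    # readlink -f yields e.g. /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
    if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
        ctrlr=/dev/$(basename "$link")          # -> /dev/nvme0
    fi
done
nvme id-ctrl "$ctrlr" | grep -E 'oacs|unvmcap'  # requires nvme-cli, as used in the trace

The oacs value of 0xf seen below has bit 3 (0x8) set, i.e. the controller supports namespace management, which is why the script goes on to read unvmcap before deciding whether a namespace revert is needed.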
00:05:32.157 03:04:58 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:32.157 03:04:58 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:32.157 03:04:58 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:32.157 03:04:58 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:32.158 03:04:58 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:32.158 03:04:58 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:32.158 03:04:58 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:32.158 03:04:58 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:32.158 03:04:58 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:32.158 03:04:58 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:32.158 03:04:58 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:32.158 03:04:58 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:32.158 03:04:58 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:32.158 03:04:58 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:32.158 03:04:58 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:32.158 03:04:58 -- common/autotest_common.sh@1553 -- # continue 00:05:32.158 03:04:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:32.158 03:04:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.158 03:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.158 03:04:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:32.158 03:04:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.158 03:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.158 03:04:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.092 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.351 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.351 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:34.288 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.288 03:05:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:34.288 03:05:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.288 03:05:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.288 03:05:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:34.288 03:05:00 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:34.288 03:05:00 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.288 03:05:00 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:34.288 03:05:00 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:34.288 03:05:00 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:34.288 03:05:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:34.288 03:05:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:34.288 03:05:00 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.288 03:05:00 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:34.288 03:05:00 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.546 03:05:00 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:34.546 03:05:00 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:34.546 03:05:00 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:34.546 03:05:00 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:34.546 03:05:00 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:34.546 03:05:00 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.546 03:05:00 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:34.546 03:05:00 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:34.546 03:05:00 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:34.546 03:05:00 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=303800 00:05:34.546 03:05:00 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.546 03:05:00 -- common/autotest_common.sh@1594 -- # waitforlisten 303800 00:05:34.546 03:05:00 -- common/autotest_common.sh@827 -- # '[' -z 303800 ']' 00:05:34.546 03:05:00 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.546 03:05:00 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.546 03:05:00 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.546 03:05:00 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.546 03:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.546 [2024-07-23 03:05:00.986359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
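The spdk_tgt whose startup banner begins here (its EAL parameter line continues directly below) is launched only so that opal_revert_cleanup can issue two JSON-RPC calls against the 0000:88:00.0 controller. Reduced to bare commands, and assuming the spdk checkout used in this workspace as the working directory, the exchange traced below is approximately:

# condensed form of the opal_revert_cleanup RPC exchange (spdk_tgt listens on /var/tmp/spdk.sock)
./build/bin/spdk_tgt &
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # fails in this run: "Revert TPer failure: 18"
kill %1                                                   # the test uses its waitforlisten/killprocess helpers instead

Both RPC names and their arguments are taken verbatim from the trace; the revert failure is tolerated (note the true at autotest_common.sh@1600) and the target process is killed afterwards.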
00:05:34.546 [2024-07-23 03:05:00.986468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303800 ] 00:05:34.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.546 [2024-07-23 03:05:01.050936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.804 [2024-07-23 03:05:01.144046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.062 03:05:01 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.062 03:05:01 -- common/autotest_common.sh@860 -- # return 0 00:05:35.062 03:05:01 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:35.062 03:05:01 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:35.062 03:05:01 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:38.339 nvme0n1 00:05:38.339 03:05:04 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:38.339 [2024-07-23 03:05:04.705051] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:38.339 [2024-07-23 03:05:04.705099] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:38.339 request: 00:05:38.339 { 00:05:38.339 "nvme_ctrlr_name": "nvme0", 00:05:38.339 "password": "test", 00:05:38.339 "method": "bdev_nvme_opal_revert", 00:05:38.339 "req_id": 1 00:05:38.339 } 00:05:38.339 Got JSON-RPC error response 00:05:38.339 response: 00:05:38.339 { 00:05:38.339 "code": -32603, 00:05:38.339 "message": "Internal error" 00:05:38.339 } 00:05:38.339 03:05:04 -- common/autotest_common.sh@1600 -- # true 00:05:38.339 03:05:04 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:38.339 03:05:04 -- common/autotest_common.sh@1604 -- # killprocess 303800 00:05:38.339 03:05:04 -- common/autotest_common.sh@946 -- # '[' -z 303800 ']' 00:05:38.339 03:05:04 -- common/autotest_common.sh@950 -- # kill -0 303800 00:05:38.339 03:05:04 -- common/autotest_common.sh@951 -- # uname 00:05:38.339 03:05:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.339 03:05:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303800 00:05:38.339 03:05:04 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.339 03:05:04 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.339 03:05:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303800' 00:05:38.339 killing process with pid 303800 00:05:38.339 03:05:04 -- common/autotest_common.sh@965 -- # kill 303800 00:05:38.339 03:05:04 -- common/autotest_common.sh@970 -- # wait 303800 00:05:40.237 03:05:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:40.237 03:05:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:40.237 03:05:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:40.237 03:05:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:40.237 03:05:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:40.237 03:05:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.237 03:05:06 -- common/autotest_common.sh@10 -- # set +x 00:05:40.237 03:05:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:40.237 03:05:06 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.237 03:05:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.237 03:05:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.237 03:05:06 -- common/autotest_common.sh@10 -- # set +x 00:05:40.237 ************************************ 00:05:40.237 START TEST env 00:05:40.237 ************************************ 00:05:40.237 03:05:06 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.237 * Looking for test storage... 00:05:40.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:40.237 03:05:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.237 03:05:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.237 03:05:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.237 03:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.237 ************************************ 00:05:40.237 START TEST env_memory 00:05:40.237 ************************************ 00:05:40.237 03:05:06 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.237 00:05:40.237 00:05:40.237 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.237 http://cunit.sourceforge.net/ 00:05:40.237 00:05:40.237 00:05:40.237 Suite: memory 00:05:40.237 Test: alloc and free memory map ...[2024-07-23 03:05:06.646445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.237 passed 00:05:40.237 Test: mem map translation ...[2024-07-23 03:05:06.666845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:40.237 [2024-07-23 03:05:06.666867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:40.237 [2024-07-23 03:05:06.666935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:40.237 [2024-07-23 03:05:06.666948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:40.237 passed 00:05:40.237 Test: mem map registration ...[2024-07-23 03:05:06.709903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:40.237 [2024-07-23 03:05:06.709923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:40.237 passed 00:05:40.237 Test: mem map adjacent registrations ...passed 00:05:40.237 00:05:40.237 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.237 suites 1 1 n/a 0 0 00:05:40.237 tests 4 4 4 0 0 00:05:40.237 asserts 152 152 152 0 n/a 00:05:40.237 00:05:40.237 Elapsed time = 0.143 seconds 00:05:40.237 00:05:40.237 real 0m0.152s 00:05:40.237 user 0m0.141s 00:05:40.237 sys 0m0.010s 00:05:40.237 03:05:06 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.237 03:05:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:40.237 ************************************ 00:05:40.237 END TEST env_memory 00:05:40.237 ************************************ 00:05:40.237 03:05:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.237 03:05:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.237 03:05:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.237 03:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.237 ************************************ 00:05:40.237 START TEST env_vtophys 00:05:40.237 ************************************ 00:05:40.237 03:05:06 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.496 EAL: lib.eal log level changed from notice to debug 00:05:40.496 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.496 EAL: Detected lcore 1 as core 1 on socket 0 00:05:40.496 EAL: Detected lcore 2 as core 2 on socket 0 00:05:40.496 EAL: Detected lcore 3 as core 3 on socket 0 00:05:40.496 EAL: Detected lcore 4 as core 4 on socket 0 00:05:40.496 EAL: Detected lcore 5 as core 5 on socket 0 00:05:40.496 EAL: Detected lcore 6 as core 8 on socket 0 00:05:40.496 EAL: Detected lcore 7 as core 9 on socket 0 00:05:40.496 EAL: Detected lcore 8 as core 10 on socket 0 00:05:40.496 EAL: Detected lcore 9 as core 11 on socket 0 00:05:40.496 EAL: Detected lcore 10 as core 12 on socket 0 00:05:40.496 EAL: Detected lcore 11 as core 13 on socket 0 00:05:40.496 EAL: Detected lcore 12 as core 0 on socket 1 00:05:40.496 EAL: Detected lcore 13 as core 1 on socket 1 00:05:40.496 EAL: Detected lcore 14 as core 2 on socket 1 00:05:40.496 EAL: Detected lcore 15 as core 3 on socket 1 00:05:40.496 EAL: Detected lcore 16 as core 4 on socket 1 00:05:40.496 EAL: Detected lcore 17 as core 5 on socket 1 00:05:40.496 EAL: Detected lcore 18 as core 8 on socket 1 00:05:40.496 EAL: Detected lcore 19 as core 9 on socket 1 00:05:40.496 EAL: Detected lcore 20 as core 10 on socket 1 00:05:40.496 EAL: Detected lcore 21 as core 11 on socket 1 00:05:40.496 EAL: Detected lcore 22 as core 12 on socket 1 00:05:40.496 EAL: Detected lcore 23 as core 13 on socket 1 00:05:40.496 EAL: Detected lcore 24 as core 0 on socket 0 00:05:40.497 EAL: Detected lcore 25 as core 1 on socket 0 00:05:40.497 EAL: Detected lcore 26 as core 2 on socket 0 00:05:40.497 EAL: Detected lcore 27 as core 3 on socket 0 00:05:40.497 EAL: Detected lcore 28 as core 4 on socket 0 00:05:40.497 EAL: Detected lcore 29 as core 5 on socket 0 00:05:40.497 EAL: Detected lcore 30 as core 8 on socket 0 00:05:40.497 EAL: Detected lcore 31 as core 9 on socket 0 00:05:40.497 EAL: Detected lcore 32 as core 10 on socket 0 00:05:40.497 EAL: Detected lcore 33 as core 11 on socket 0 00:05:40.497 EAL: Detected lcore 34 as core 12 on socket 0 00:05:40.497 EAL: Detected lcore 35 as core 13 on socket 0 00:05:40.497 EAL: Detected lcore 36 as core 0 on socket 1 00:05:40.497 EAL: Detected lcore 37 as core 1 on socket 1 00:05:40.497 EAL: Detected lcore 38 as core 2 on socket 1 00:05:40.497 EAL: Detected lcore 39 as core 3 on socket 1 00:05:40.497 EAL: Detected lcore 40 as core 4 on socket 1 00:05:40.497 EAL: Detected lcore 41 as core 5 on socket 1 00:05:40.497 EAL: Detected lcore 42 as core 8 on socket 1 00:05:40.497 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:40.497 EAL: Detected lcore 44 as core 10 on socket 1 00:05:40.497 EAL: Detected lcore 45 as core 11 on socket 1 00:05:40.497 EAL: Detected lcore 46 as core 12 on socket 1 00:05:40.497 EAL: Detected lcore 47 as core 13 on socket 1 00:05:40.497 EAL: Maximum logical cores by configuration: 128 00:05:40.497 EAL: Detected CPU lcores: 48 00:05:40.497 EAL: Detected NUMA nodes: 2 00:05:40.497 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:40.497 EAL: Detected shared linkage of DPDK 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:40.497 EAL: Registered [vdev] bus. 00:05:40.497 EAL: bus.vdev log level changed from disabled to notice 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:40.497 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:40.497 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:40.497 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:40.497 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Bus pci wants IOVA as 'DC' 00:05:40.497 EAL: Bus vdev wants IOVA as 'DC' 00:05:40.497 EAL: Buses did not request a specific IOVA mode. 00:05:40.497 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:40.497 EAL: Selected IOVA mode 'VA' 00:05:40.497 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.497 EAL: Probing VFIO support... 00:05:40.497 EAL: IOMMU type 1 (Type 1) is supported 00:05:40.497 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:40.497 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:40.497 EAL: VFIO support initialized 00:05:40.497 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.497 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.497 EAL: Setting up physically contiguous memory... 
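The 'Selected IOVA mode VA' decision above only happens because the host exposes an IOMMU and the devices were rebound to vfio-pci by setup.sh earlier in the run. A quick way to confirm those preconditions on a test host (a hedged sketch; the paths are the usual Linux sysfs/procfs locations and may vary by kernel) is:

    # Non-empty when the kernel IOMMU is enabled; required for IOVA-as-VA.
    ls /sys/kernel/iommu_groups | head

    # 2048 kB hugepages that back the memseg lists created below.
    grep -i hugepages /proc/meminfo

    # Devices currently owned by vfio-pci after scripts/setup.sh.
    ls /sys/bus/pci/drivers/vfio-pci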
00:05:40.497 EAL: Setting maximum number of open files to 524288 00:05:40.497 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.497 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:40.497 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.497 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:40.497 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.497 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:40.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.497 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.497 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:40.497 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:40.497 EAL: Hugepages will be freed exactly as allocated. 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: TSC frequency is ~2700000 KHz 00:05:40.497 EAL: Main lcore 0 is ready (tid=7feca6863a00;cpuset=[0]) 00:05:40.497 EAL: Trying to obtain current memory policy. 00:05:40.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.497 EAL: Restoring previous memory policy: 0 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was expanded by 2MB 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:40.497 EAL: Mem event callback 'spdk:(nil)' registered 00:05:40.497 00:05:40.497 00:05:40.497 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.497 http://cunit.sourceforge.net/ 00:05:40.497 00:05:40.497 00:05:40.497 Suite: components_suite 00:05:40.497 Test: vtophys_malloc_test ...passed 00:05:40.497 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:40.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.497 EAL: Restoring previous memory policy: 4 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.497 EAL: Trying to obtain current memory policy. 00:05:40.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.497 EAL: Restoring previous memory policy: 4 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.497 EAL: Trying to obtain current memory policy. 00:05:40.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.497 EAL: Restoring previous memory policy: 4 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.497 EAL: Trying to obtain current memory policy. 
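The expand/shrink pairs in this suite come from the vtophys malloc tests allocating progressively larger buffers (4 MB, 6 MB, 10 MB, and so on up to 1026 MB) and freeing them again, each step reported through the 'spdk:' mem event callback registered above. The same binary can be run outside autotest; a rough sketch, assuming hugepages are reserved via setup.sh first (HUGEMEM is in megabytes), is:

    # Reserve 2 MB hugepages and bind devices, then run the CUnit suite directly.
    sudo HUGEMEM=2048 ./scripts/setup.sh
    sudo ./test/env/vtophys/vtophys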
00:05:40.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.497 EAL: Restoring previous memory policy: 4 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.497 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.497 EAL: request: mp_malloc_sync 00:05:40.497 EAL: No shared files mode enabled, IPC is disabled 00:05:40.497 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.497 EAL: Trying to obtain current memory policy. 00:05:40.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.498 EAL: Restoring previous memory policy: 4 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.498 EAL: Trying to obtain current memory policy. 00:05:40.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.498 EAL: Restoring previous memory policy: 4 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.498 EAL: Trying to obtain current memory policy. 00:05:40.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.498 EAL: Restoring previous memory policy: 4 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.498 EAL: request: mp_malloc_sync 00:05:40.498 EAL: No shared files mode enabled, IPC is disabled 00:05:40.498 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.498 EAL: Trying to obtain current memory policy. 00:05:40.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.756 EAL: Restoring previous memory policy: 4 00:05:40.756 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.756 EAL: request: mp_malloc_sync 00:05:40.756 EAL: No shared files mode enabled, IPC is disabled 00:05:40.756 EAL: Heap on socket 0 was expanded by 258MB 00:05:40.756 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.756 EAL: request: mp_malloc_sync 00:05:40.756 EAL: No shared files mode enabled, IPC is disabled 00:05:40.756 EAL: Heap on socket 0 was shrunk by 258MB 00:05:40.756 EAL: Trying to obtain current memory policy. 
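Each step also resets the NUMA policy ('Setting policy MPOL_PREFERRED for socket 0'), so these heap extensions are expected to land on node 0 of this two-node host. If a run needs to be checked for cross-node spill, the per-node hugepage counters can be watched while the test executes; a small sketch assuming the standard sysfs layout:

    # Allocated vs. free 2048 kB hugepages per NUMA node during the malloc test.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages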
00:05:40.756 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.014 EAL: Restoring previous memory policy: 4 00:05:41.014 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.014 EAL: request: mp_malloc_sync 00:05:41.014 EAL: No shared files mode enabled, IPC is disabled 00:05:41.014 EAL: Heap on socket 0 was expanded by 514MB 00:05:41.014 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.014 EAL: request: mp_malloc_sync 00:05:41.014 EAL: No shared files mode enabled, IPC is disabled 00:05:41.014 EAL: Heap on socket 0 was shrunk by 514MB 00:05:41.014 EAL: Trying to obtain current memory policy. 00:05:41.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.620 EAL: Restoring previous memory policy: 4 00:05:41.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.620 EAL: request: mp_malloc_sync 00:05:41.620 EAL: No shared files mode enabled, IPC is disabled 00:05:41.620 EAL: Heap on socket 0 was expanded by 1026MB 00:05:41.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.878 EAL: request: mp_malloc_sync 00:05:41.879 EAL: No shared files mode enabled, IPC is disabled 00:05:41.879 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:41.879 passed 00:05:41.879 00:05:41.879 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.879 suites 1 1 n/a 0 0 00:05:41.879 tests 2 2 2 0 0 00:05:41.879 asserts 497 497 497 0 n/a 00:05:41.879 00:05:41.879 Elapsed time = 1.374 seconds 00:05:41.879 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.879 EAL: request: mp_malloc_sync 00:05:41.879 EAL: No shared files mode enabled, IPC is disabled 00:05:41.879 EAL: Heap on socket 0 was shrunk by 2MB 00:05:41.879 EAL: No shared files mode enabled, IPC is disabled 00:05:41.879 EAL: No shared files mode enabled, IPC is disabled 00:05:41.879 EAL: No shared files mode enabled, IPC is disabled 00:05:41.879 00:05:41.879 real 0m1.504s 00:05:41.879 user 0m0.866s 00:05:41.879 sys 0m0.594s 00:05:41.879 03:05:08 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.879 03:05:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:41.879 ************************************ 00:05:41.879 END TEST env_vtophys 00:05:41.879 ************************************ 00:05:41.879 03:05:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.879 03:05:08 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.879 03:05:08 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.879 03:05:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.879 ************************************ 00:05:41.879 START TEST env_pci 00:05:41.879 ************************************ 00:05:41.879 03:05:08 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.879 00:05:41.879 00:05:41.879 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.879 http://cunit.sourceforge.net/ 00:05:41.879 00:05:41.879 00:05:41.879 Suite: pci 00:05:41.879 Test: pci_hook ...[2024-07-23 03:05:08.373834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 304704 has claimed it 00:05:41.879 EAL: Cannot find device (10000:00:01.0) 00:05:41.879 EAL: Failed to attach device on primary process 00:05:41.879 passed 00:05:41.879 00:05:41.879 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:41.879 suites 1 1 n/a 0 0 00:05:41.879 tests 1 1 1 0 0 00:05:41.879 asserts 25 25 25 0 n/a 00:05:41.879 00:05:41.879 Elapsed time = 0.021 seconds 00:05:41.879 00:05:41.879 real 0m0.034s 00:05:41.879 user 0m0.010s 00:05:41.879 sys 0m0.024s 00:05:41.879 03:05:08 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.879 03:05:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:41.879 ************************************ 00:05:41.879 END TEST env_pci 00:05:41.879 ************************************ 00:05:41.879 03:05:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:41.879 03:05:08 env -- env/env.sh@15 -- # uname 00:05:41.879 03:05:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:41.879 03:05:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:41.879 03:05:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.879 03:05:08 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:41.879 03:05:08 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.879 03:05:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.879 ************************************ 00:05:41.879 START TEST env_dpdk_post_init 00:05:41.879 ************************************ 00:05:41.879 03:05:08 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.137 EAL: Detected CPU lcores: 48 00:05:42.137 EAL: Detected NUMA nodes: 2 00:05:42.137 EAL: Detected shared linkage of DPDK 00:05:42.137 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.137 EAL: Selected IOVA mode 'VA' 00:05:42.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.137 EAL: VFIO support initialized 00:05:42.137 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.137 EAL: Using IOMMU type 1 (Type 1) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:42.137 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:42.396 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:42.396 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:42.396 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:42.963 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:46.243 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:46.243 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:46.502 Starting DPDK initialization... 00:05:46.502 Starting SPDK post initialization... 00:05:46.502 SPDK NVMe probe 00:05:46.502 Attaching to 0000:88:00.0 00:05:46.502 Attached to 0000:88:00.0 00:05:46.502 Cleaning up... 00:05:46.502 00:05:46.502 real 0m4.387s 00:05:46.502 user 0m3.263s 00:05:46.502 sys 0m0.187s 00:05:46.502 03:05:12 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.502 03:05:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 END TEST env_dpdk_post_init 00:05:46.502 ************************************ 00:05:46.502 03:05:12 env -- env/env.sh@26 -- # uname 00:05:46.502 03:05:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.502 03:05:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.502 03:05:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.502 03:05:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.502 03:05:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 START TEST env_mem_callbacks 00:05:46.502 ************************************ 00:05:46.502 03:05:12 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.502 EAL: Detected CPU lcores: 48 00:05:46.502 EAL: Detected NUMA nodes: 2 00:05:46.502 EAL: Detected shared linkage of DPDK 00:05:46.502 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.502 EAL: Selected IOVA mode 'VA' 00:05:46.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.502 EAL: VFIO support initialized 00:05:46.502 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.502 00:05:46.502 00:05:46.502 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.502 http://cunit.sourceforge.net/ 00:05:46.502 00:05:46.502 00:05:46.502 Suite: memory 00:05:46.502 Test: test ... 
00:05:46.502 register 0x200000200000 2097152 00:05:46.502 malloc 3145728 00:05:46.502 register 0x200000400000 4194304 00:05:46.502 buf 0x200000500000 len 3145728 PASSED 00:05:46.502 malloc 64 00:05:46.502 buf 0x2000004fff40 len 64 PASSED 00:05:46.502 malloc 4194304 00:05:46.502 register 0x200000800000 6291456 00:05:46.502 buf 0x200000a00000 len 4194304 PASSED 00:05:46.502 free 0x200000500000 3145728 00:05:46.502 free 0x2000004fff40 64 00:05:46.502 unregister 0x200000400000 4194304 PASSED 00:05:46.502 free 0x200000a00000 4194304 00:05:46.502 unregister 0x200000800000 6291456 PASSED 00:05:46.502 malloc 8388608 00:05:46.502 register 0x200000400000 10485760 00:05:46.502 buf 0x200000600000 len 8388608 PASSED 00:05:46.502 free 0x200000600000 8388608 00:05:46.502 unregister 0x200000400000 10485760 PASSED 00:05:46.502 passed 00:05:46.502 00:05:46.502 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.502 suites 1 1 n/a 0 0 00:05:46.502 tests 1 1 1 0 0 00:05:46.502 asserts 15 15 15 0 n/a 00:05:46.502 00:05:46.502 Elapsed time = 0.005 seconds 00:05:46.502 00:05:46.502 real 0m0.049s 00:05:46.502 user 0m0.016s 00:05:46.502 sys 0m0.033s 00:05:46.502 03:05:12 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.502 03:05:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 END TEST env_mem_callbacks 00:05:46.502 ************************************ 00:05:46.502 00:05:46.502 real 0m6.411s 00:05:46.502 user 0m4.419s 00:05:46.502 sys 0m1.029s 00:05:46.502 03:05:12 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.502 03:05:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 END TEST env 00:05:46.502 ************************************ 00:05:46.502 03:05:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.502 03:05:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.502 03:05:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.502 03:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 ************************************ 00:05:46.502 START TEST rpc 00:05:46.502 ************************************ 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.502 * Looking for test storage... 00:05:46.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.502 03:05:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=305360 00:05:46.502 03:05:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:46.502 03:05:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.502 03:05:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 305360 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@827 -- # '[' -z 305360 ']' 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
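The rpc suite starting here drives a freshly launched spdk_tgt (started with '-e bdev' so the bdev tracepoint group can be tested) entirely over the /var/tmp/spdk.sock Unix socket. The calls it wraps via rpc_cmd can also be issued by hand once the listen message above appears; a brief sketch, assuming the default socket path:

    # Confirm the target is answering RPCs.
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head

    # The same calls rpc_integrity makes below: create an 8 MiB malloc bdev with 512 B blocks, then list bdevs.
    ./scripts/rpc.py bdev_malloc_create 8 512
    ./scripts/rpc.py bdev_get_bdevs | jq length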
00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.502 03:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.761 [2024-07-23 03:05:13.106134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:46.761 [2024-07-23 03:05:13.106227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305360 ] 00:05:46.761 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.761 [2024-07-23 03:05:13.166667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.761 [2024-07-23 03:05:13.259670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.761 [2024-07-23 03:05:13.259725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 305360' to capture a snapshot of events at runtime. 00:05:46.761 [2024-07-23 03:05:13.259740] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.761 [2024-07-23 03:05:13.259752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.761 [2024-07-23 03:05:13.259763] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid305360 for offline analysis/debug. 00:05:46.761 [2024-07-23 03:05:13.259789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.020 03:05:13 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.020 03:05:13 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:47.020 03:05:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.020 03:05:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.020 03:05:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.020 03:05:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.020 03:05:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.020 03:05:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.020 03:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.020 ************************************ 00:05:47.020 START TEST rpc_integrity 00:05:47.020 ************************************ 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:47.020 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.020 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.020 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.020 03:05:13 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.020 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.020 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.278 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.278 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.278 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.278 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.278 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.278 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.278 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.278 { 00:05:47.278 "name": "Malloc0", 00:05:47.278 "aliases": [ 00:05:47.278 "8b0cea27-2248-4eb4-9158-c04d1b61bd6d" 00:05:47.278 ], 00:05:47.278 "product_name": "Malloc disk", 00:05:47.278 "block_size": 512, 00:05:47.278 "num_blocks": 16384, 00:05:47.278 "uuid": "8b0cea27-2248-4eb4-9158-c04d1b61bd6d", 00:05:47.278 "assigned_rate_limits": { 00:05:47.278 "rw_ios_per_sec": 0, 00:05:47.278 "rw_mbytes_per_sec": 0, 00:05:47.278 "r_mbytes_per_sec": 0, 00:05:47.278 "w_mbytes_per_sec": 0 00:05:47.278 }, 00:05:47.278 "claimed": false, 00:05:47.278 "zoned": false, 00:05:47.278 "supported_io_types": { 00:05:47.278 "read": true, 00:05:47.278 "write": true, 00:05:47.278 "unmap": true, 00:05:47.278 "write_zeroes": true, 00:05:47.278 "flush": true, 00:05:47.278 "reset": true, 00:05:47.278 "compare": false, 00:05:47.278 "compare_and_write": false, 00:05:47.278 "abort": true, 00:05:47.279 "nvme_admin": false, 00:05:47.279 "nvme_io": false 00:05:47.279 }, 00:05:47.279 "memory_domains": [ 00:05:47.279 { 00:05:47.279 "dma_device_id": "system", 00:05:47.279 "dma_device_type": 1 00:05:47.279 }, 00:05:47.279 { 00:05:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.279 "dma_device_type": 2 00:05:47.279 } 00:05:47.279 ], 00:05:47.279 "driver_specific": {} 00:05:47.279 } 00:05:47.279 ]' 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 [2024-07-23 03:05:13.652427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.279 [2024-07-23 03:05:13.652473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.279 [2024-07-23 03:05:13.652498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe4ed60 00:05:47.279 [2024-07-23 03:05:13.652513] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.279 [2024-07-23 03:05:13.654018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.279 [2024-07-23 03:05:13.654046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.279 Passthru0 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.279 { 00:05:47.279 "name": "Malloc0", 00:05:47.279 "aliases": [ 00:05:47.279 "8b0cea27-2248-4eb4-9158-c04d1b61bd6d" 00:05:47.279 ], 00:05:47.279 "product_name": "Malloc disk", 00:05:47.279 "block_size": 512, 00:05:47.279 "num_blocks": 16384, 00:05:47.279 "uuid": "8b0cea27-2248-4eb4-9158-c04d1b61bd6d", 00:05:47.279 "assigned_rate_limits": { 00:05:47.279 "rw_ios_per_sec": 0, 00:05:47.279 "rw_mbytes_per_sec": 0, 00:05:47.279 "r_mbytes_per_sec": 0, 00:05:47.279 "w_mbytes_per_sec": 0 00:05:47.279 }, 00:05:47.279 "claimed": true, 00:05:47.279 "claim_type": "exclusive_write", 00:05:47.279 "zoned": false, 00:05:47.279 "supported_io_types": { 00:05:47.279 "read": true, 00:05:47.279 "write": true, 00:05:47.279 "unmap": true, 00:05:47.279 "write_zeroes": true, 00:05:47.279 "flush": true, 00:05:47.279 "reset": true, 00:05:47.279 "compare": false, 00:05:47.279 "compare_and_write": false, 00:05:47.279 "abort": true, 00:05:47.279 "nvme_admin": false, 00:05:47.279 "nvme_io": false 00:05:47.279 }, 00:05:47.279 "memory_domains": [ 00:05:47.279 { 00:05:47.279 "dma_device_id": "system", 00:05:47.279 "dma_device_type": 1 00:05:47.279 }, 00:05:47.279 { 00:05:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.279 "dma_device_type": 2 00:05:47.279 } 00:05:47.279 ], 00:05:47.279 "driver_specific": {} 00:05:47.279 }, 00:05:47.279 { 00:05:47.279 "name": "Passthru0", 00:05:47.279 "aliases": [ 00:05:47.279 "4001a9c4-6dd1-53e6-8a92-2cce33a8e309" 00:05:47.279 ], 00:05:47.279 "product_name": "passthru", 00:05:47.279 "block_size": 512, 00:05:47.279 "num_blocks": 16384, 00:05:47.279 "uuid": "4001a9c4-6dd1-53e6-8a92-2cce33a8e309", 00:05:47.279 "assigned_rate_limits": { 00:05:47.279 "rw_ios_per_sec": 0, 00:05:47.279 "rw_mbytes_per_sec": 0, 00:05:47.279 "r_mbytes_per_sec": 0, 00:05:47.279 "w_mbytes_per_sec": 0 00:05:47.279 }, 00:05:47.279 "claimed": false, 00:05:47.279 "zoned": false, 00:05:47.279 "supported_io_types": { 00:05:47.279 "read": true, 00:05:47.279 "write": true, 00:05:47.279 "unmap": true, 00:05:47.279 "write_zeroes": true, 00:05:47.279 "flush": true, 00:05:47.279 "reset": true, 00:05:47.279 "compare": false, 00:05:47.279 "compare_and_write": false, 00:05:47.279 "abort": true, 00:05:47.279 "nvme_admin": false, 00:05:47.279 "nvme_io": false 00:05:47.279 }, 00:05:47.279 "memory_domains": [ 00:05:47.279 { 00:05:47.279 "dma_device_id": "system", 00:05:47.279 "dma_device_type": 1 00:05:47.279 }, 00:05:47.279 { 00:05:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.279 "dma_device_type": 2 00:05:47.279 } 00:05:47.279 ], 00:05:47.279 "driver_specific": { 00:05:47.279 "passthru": { 00:05:47.279 "name": "Passthru0", 00:05:47.279 "base_bdev_name": "Malloc0" 00:05:47.279 } 00:05:47.279 } 00:05:47.279 } 00:05:47.279 ]' 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 
03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.279 03:05:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.279 00:05:47.279 real 0m0.227s 00:05:47.279 user 0m0.145s 00:05:47.279 sys 0m0.026s 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 ************************************ 00:05:47.279 END TEST rpc_integrity 00:05:47.279 ************************************ 00:05:47.279 03:05:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.279 03:05:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.279 03:05:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.279 03:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 ************************************ 00:05:47.279 START TEST rpc_plugins 00:05:47.279 ************************************ 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:47.279 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.279 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.279 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.279 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.279 { 00:05:47.279 "name": "Malloc1", 00:05:47.279 "aliases": [ 00:05:47.279 "322a01be-7284-4934-ba16-278630191a96" 00:05:47.279 ], 00:05:47.279 "product_name": "Malloc disk", 00:05:47.279 "block_size": 4096, 00:05:47.279 "num_blocks": 256, 00:05:47.279 "uuid": "322a01be-7284-4934-ba16-278630191a96", 00:05:47.279 "assigned_rate_limits": { 00:05:47.279 "rw_ios_per_sec": 0, 00:05:47.279 "rw_mbytes_per_sec": 0, 00:05:47.279 "r_mbytes_per_sec": 0, 00:05:47.279 "w_mbytes_per_sec": 0 00:05:47.279 }, 00:05:47.279 "claimed": false, 00:05:47.279 "zoned": false, 00:05:47.279 "supported_io_types": { 00:05:47.279 "read": true, 00:05:47.279 "write": true, 00:05:47.279 "unmap": true, 00:05:47.279 "write_zeroes": true, 00:05:47.279 
"flush": true, 00:05:47.279 "reset": true, 00:05:47.279 "compare": false, 00:05:47.279 "compare_and_write": false, 00:05:47.279 "abort": true, 00:05:47.279 "nvme_admin": false, 00:05:47.279 "nvme_io": false 00:05:47.279 }, 00:05:47.279 "memory_domains": [ 00:05:47.279 { 00:05:47.279 "dma_device_id": "system", 00:05:47.279 "dma_device_type": 1 00:05:47.279 }, 00:05:47.279 { 00:05:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.279 "dma_device_type": 2 00:05:47.279 } 00:05:47.279 ], 00:05:47.279 "driver_specific": {} 00:05:47.279 } 00:05:47.279 ]' 00:05:47.279 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:47.538 03:05:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:47.538 00:05:47.538 real 0m0.113s 00:05:47.538 user 0m0.073s 00:05:47.538 sys 0m0.011s 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.538 03:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 ************************************ 00:05:47.538 END TEST rpc_plugins 00:05:47.538 ************************************ 00:05:47.538 03:05:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:47.538 03:05:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.538 03:05:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.538 03:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 ************************************ 00:05:47.538 START TEST rpc_trace_cmd_test 00:05:47.538 ************************************ 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:47.538 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid305360", 00:05:47.538 "tpoint_group_mask": "0x8", 00:05:47.538 "iscsi_conn": { 00:05:47.538 "mask": "0x2", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "scsi": { 00:05:47.538 "mask": "0x4", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "bdev": { 00:05:47.538 "mask": "0x8", 00:05:47.538 "tpoint_mask": 
"0xffffffffffffffff" 00:05:47.538 }, 00:05:47.538 "nvmf_rdma": { 00:05:47.538 "mask": "0x10", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "nvmf_tcp": { 00:05:47.538 "mask": "0x20", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "ftl": { 00:05:47.538 "mask": "0x40", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "blobfs": { 00:05:47.538 "mask": "0x80", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "dsa": { 00:05:47.538 "mask": "0x200", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "thread": { 00:05:47.538 "mask": "0x400", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "nvme_pcie": { 00:05:47.538 "mask": "0x800", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "iaa": { 00:05:47.538 "mask": "0x1000", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "nvme_tcp": { 00:05:47.538 "mask": "0x2000", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "bdev_nvme": { 00:05:47.538 "mask": "0x4000", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 }, 00:05:47.538 "sock": { 00:05:47.538 "mask": "0x8000", 00:05:47.538 "tpoint_mask": "0x0" 00:05:47.538 } 00:05:47.538 }' 00:05:47.538 03:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.538 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.797 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.797 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.797 03:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.797 00:05:47.797 real 0m0.198s 00:05:47.797 user 0m0.179s 00:05:47.797 sys 0m0.012s 00:05:47.797 03:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 ************************************ 00:05:47.797 END TEST rpc_trace_cmd_test 00:05:47.797 ************************************ 00:05:47.797 03:05:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:47.797 03:05:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.797 03:05:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.797 03:05:14 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.797 03:05:14 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.797 03:05:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 ************************************ 00:05:47.797 START TEST rpc_daemon_integrity 00:05:47.797 ************************************ 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.797 { 00:05:47.797 "name": "Malloc2", 00:05:47.797 "aliases": [ 00:05:47.797 "ea9d8c98-779a-4ef6-9435-9af2f2f69f99" 00:05:47.797 ], 00:05:47.797 "product_name": "Malloc disk", 00:05:47.797 "block_size": 512, 00:05:47.797 "num_blocks": 16384, 00:05:47.797 "uuid": "ea9d8c98-779a-4ef6-9435-9af2f2f69f99", 00:05:47.797 "assigned_rate_limits": { 00:05:47.797 "rw_ios_per_sec": 0, 00:05:47.797 "rw_mbytes_per_sec": 0, 00:05:47.797 "r_mbytes_per_sec": 0, 00:05:47.797 "w_mbytes_per_sec": 0 00:05:47.797 }, 00:05:47.797 "claimed": false, 00:05:47.797 "zoned": false, 00:05:47.797 "supported_io_types": { 00:05:47.797 "read": true, 00:05:47.797 "write": true, 00:05:47.797 "unmap": true, 00:05:47.797 "write_zeroes": true, 00:05:47.797 "flush": true, 00:05:47.797 "reset": true, 00:05:47.797 "compare": false, 00:05:47.797 "compare_and_write": false, 00:05:47.797 "abort": true, 00:05:47.797 "nvme_admin": false, 00:05:47.797 "nvme_io": false 00:05:47.797 }, 00:05:47.797 "memory_domains": [ 00:05:47.797 { 00:05:47.797 "dma_device_id": "system", 00:05:47.797 "dma_device_type": 1 00:05:47.797 }, 00:05:47.797 { 00:05:47.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.797 "dma_device_type": 2 00:05:47.797 } 00:05:47.797 ], 00:05:47.797 "driver_specific": {} 00:05:47.797 } 00:05:47.797 ]' 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 [2024-07-23 03:05:14.330415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:47.797 [2024-07-23 03:05:14.330460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.797 [2024-07-23 03:05:14.330484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1000420 00:05:47.797 [2024-07-23 03:05:14.330500] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.797 [2024-07-23 03:05:14.331870] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.797 [2024-07-23 03:05:14.331895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.797 Passthru0 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.797 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.797 { 00:05:47.797 "name": "Malloc2", 00:05:47.797 "aliases": [ 00:05:47.797 "ea9d8c98-779a-4ef6-9435-9af2f2f69f99" 00:05:47.797 ], 00:05:47.797 "product_name": "Malloc disk", 00:05:47.797 "block_size": 512, 00:05:47.797 "num_blocks": 16384, 00:05:47.797 "uuid": "ea9d8c98-779a-4ef6-9435-9af2f2f69f99", 00:05:47.797 "assigned_rate_limits": { 00:05:47.797 "rw_ios_per_sec": 0, 00:05:47.797 "rw_mbytes_per_sec": 0, 00:05:47.797 "r_mbytes_per_sec": 0, 00:05:47.797 "w_mbytes_per_sec": 0 00:05:47.797 }, 00:05:47.797 "claimed": true, 00:05:47.797 "claim_type": "exclusive_write", 00:05:47.797 "zoned": false, 00:05:47.797 "supported_io_types": { 00:05:47.797 "read": true, 00:05:47.797 "write": true, 00:05:47.797 "unmap": true, 00:05:47.797 "write_zeroes": true, 00:05:47.797 "flush": true, 00:05:47.797 "reset": true, 00:05:47.797 "compare": false, 00:05:47.797 "compare_and_write": false, 00:05:47.797 "abort": true, 00:05:47.797 "nvme_admin": false, 00:05:47.797 "nvme_io": false 00:05:47.797 }, 00:05:47.797 "memory_domains": [ 00:05:47.797 { 00:05:47.797 "dma_device_id": "system", 00:05:47.797 "dma_device_type": 1 00:05:47.798 }, 00:05:47.798 { 00:05:47.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.798 "dma_device_type": 2 00:05:47.798 } 00:05:47.798 ], 00:05:47.798 "driver_specific": {} 00:05:47.798 }, 00:05:47.798 { 00:05:47.798 "name": "Passthru0", 00:05:47.798 "aliases": [ 00:05:47.798 "02fc4209-2b4b-53f3-8d50-a2a4418eb1ab" 00:05:47.798 ], 00:05:47.798 "product_name": "passthru", 00:05:47.798 "block_size": 512, 00:05:47.798 "num_blocks": 16384, 00:05:47.798 "uuid": "02fc4209-2b4b-53f3-8d50-a2a4418eb1ab", 00:05:47.798 "assigned_rate_limits": { 00:05:47.798 "rw_ios_per_sec": 0, 00:05:47.798 "rw_mbytes_per_sec": 0, 00:05:47.798 "r_mbytes_per_sec": 0, 00:05:47.798 "w_mbytes_per_sec": 0 00:05:47.798 }, 00:05:47.798 "claimed": false, 00:05:47.798 "zoned": false, 00:05:47.798 "supported_io_types": { 00:05:47.798 "read": true, 00:05:47.798 "write": true, 00:05:47.798 "unmap": true, 00:05:47.798 "write_zeroes": true, 00:05:47.798 "flush": true, 00:05:47.798 "reset": true, 00:05:47.798 "compare": false, 00:05:47.798 "compare_and_write": false, 00:05:47.798 "abort": true, 00:05:47.798 "nvme_admin": false, 00:05:47.798 "nvme_io": false 00:05:47.798 }, 00:05:47.798 "memory_domains": [ 00:05:47.798 { 00:05:47.798 "dma_device_id": "system", 00:05:47.798 "dma_device_type": 1 00:05:47.798 }, 00:05:47.798 { 00:05:47.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.798 "dma_device_type": 2 00:05:47.798 } 00:05:47.798 ], 00:05:47.798 "driver_specific": { 00:05:47.798 "passthru": { 00:05:47.798 "name": "Passthru0", 00:05:47.798 "base_bdev_name": "Malloc2" 00:05:47.798 } 00:05:47.798 } 00:05:47.798 } 00:05:47.798 ]' 00:05:47.798 03:05:14 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.055 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.055 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.055 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.055 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.056 00:05:48.056 real 0m0.233s 00:05:48.056 user 0m0.153s 00:05:48.056 sys 0m0.023s 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.056 03:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.056 ************************************ 00:05:48.056 END TEST rpc_daemon_integrity 00:05:48.056 ************************************ 00:05:48.056 03:05:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.056 03:05:14 rpc -- rpc/rpc.sh@84 -- # killprocess 305360 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@946 -- # '[' -z 305360 ']' 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@950 -- # kill -0 305360 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 305360 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 305360' 00:05:48.056 killing process with pid 305360 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@965 -- # kill 305360 00:05:48.056 03:05:14 rpc -- common/autotest_common.sh@970 -- # wait 305360 00:05:48.622 00:05:48.622 real 0m1.910s 00:05:48.622 user 0m2.405s 00:05:48.622 sys 0m0.586s 00:05:48.622 03:05:14 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.622 03:05:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.622 ************************************ 00:05:48.622 END TEST rpc 00:05:48.622 ************************************ 00:05:48.622 03:05:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.622 03:05:14 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.622 03:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.622 03:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:48.622 ************************************ 00:05:48.622 START TEST skip_rpc 00:05:48.622 ************************************ 00:05:48.622 03:05:14 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.622 * Looking for test storage... 00:05:48.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.622 03:05:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.622 03:05:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:48.622 03:05:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:48.622 03:05:15 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.622 03:05:15 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.622 03:05:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.622 ************************************ 00:05:48.622 START TEST skip_rpc 00:05:48.622 ************************************ 00:05:48.622 03:05:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:48.622 03:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=305799 00:05:48.622 03:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:48.622 03:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.622 03:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.622 [2024-07-23 03:05:15.091085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
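The test_skip_rpc case launched just above boots spdk_tgt with --no-rpc-server and then expects every RPC to be rejected. A minimal sketch of that negative check, assuming a built SPDK tree at a placeholder SPDK_DIR and the default /var/tmp/spdk.sock socket (this is not the actual rpc/skip_rpc.sh helper, just the idea behind it):

```bash
#!/usr/bin/env bash
# Sketch only: start a target without an RPC server and prove RPCs are rejected.
# SPDK_DIR is a placeholder; the real test runs from the Jenkins workspace path.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5   # the test sleeps instead of waiting on a socket that will never appear

# spdk_get_version must fail because no RPC listener was created.
if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
    kill "$tgt_pid"; exit 1
fi

kill "$tgt_pid"
wait "$tgt_pid" 2>/dev/null || true
echo "skip_rpc negative check passed"
```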
00:05:48.622 [2024-07-23 03:05:15.091160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305799 ] 00:05:48.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.622 [2024-07-23 03:05:15.151518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.881 [2024-07-23 03:05:15.241472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 305799 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 305799 ']' 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 305799 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 305799 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 305799' 00:05:54.143 killing process with pid 305799 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 305799 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 305799 00:05:54.143 00:05:54.143 real 0m5.453s 00:05:54.143 user 0m5.145s 00:05:54.143 sys 0m0.311s 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.143 03:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.143 ************************************ 00:05:54.143 END TEST skip_rpc 
00:05:54.143 ************************************ 00:05:54.143 03:05:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.143 03:05:20 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.143 03:05:20 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.143 03:05:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.143 ************************************ 00:05:54.143 START TEST skip_rpc_with_json 00:05:54.143 ************************************ 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=306486 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 306486 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 306486 ']' 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.143 03:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.143 [2024-07-23 03:05:20.597590] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
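The skip_rpc_with_json run that follows exercises a simple flow: ask for a transport that does not exist yet (the call must return "No such device"), create the TCP transport, then persist the whole runtime state with save_config so a later target can be booted from the JSON. A hedged sketch of that sequence against a target already listening on the default socket (paths are placeholders, not the autotest workspace):

```bash
#!/usr/bin/env bash
# Sketch of the skip_rpc_with_json RPC sequence; assumes spdk_tgt is already
# running on the default /var/tmp/spdk.sock.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"

# Before any transport exists this is expected to fail with "No such device".
"$rpc" nvmf_get_transports --trtype tcp && echo "unexpected success" >&2

# Create the TCP transport, then capture the full configuration as JSON.
"$rpc" nvmf_create_transport -t tcp
"$rpc" save_config > config.json

# The saved file can later be replayed into a fresh target started with --json config.json.
jq '.subsystems[] | select(.subsystem == "nvmf")' config.json
```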
00:05:54.143 [2024-07-23 03:05:20.597708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306486 ] 00:05:54.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.143 [2024-07-23 03:05:20.660007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.401 [2024-07-23 03:05:20.747867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.659 [2024-07-23 03:05:21.014561] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.659 request: 00:05:54.659 { 00:05:54.659 "trtype": "tcp", 00:05:54.659 "method": "nvmf_get_transports", 00:05:54.659 "req_id": 1 00:05:54.659 } 00:05:54.659 Got JSON-RPC error response 00:05:54.659 response: 00:05:54.659 { 00:05:54.659 "code": -19, 00:05:54.659 "message": "No such device" 00:05:54.659 } 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.659 [2024-07-23 03:05:21.022692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.659 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.659 { 00:05:54.659 "subsystems": [ 00:05:54.659 { 00:05:54.659 "subsystem": "vfio_user_target", 00:05:54.659 "config": null 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "keyring", 00:05:54.659 "config": [] 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "iobuf", 00:05:54.659 "config": [ 00:05:54.659 { 00:05:54.659 "method": "iobuf_set_options", 00:05:54.659 "params": { 00:05:54.659 "small_pool_count": 8192, 00:05:54.659 "large_pool_count": 1024, 00:05:54.659 "small_bufsize": 8192, 00:05:54.659 "large_bufsize": 135168 00:05:54.659 } 00:05:54.659 } 00:05:54.659 ] 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "sock", 00:05:54.659 "config": [ 00:05:54.659 { 00:05:54.659 "method": "sock_set_default_impl", 00:05:54.659 "params": { 00:05:54.659 "impl_name": "posix" 00:05:54.659 } 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "method": 
"sock_impl_set_options", 00:05:54.659 "params": { 00:05:54.659 "impl_name": "ssl", 00:05:54.659 "recv_buf_size": 4096, 00:05:54.659 "send_buf_size": 4096, 00:05:54.659 "enable_recv_pipe": true, 00:05:54.659 "enable_quickack": false, 00:05:54.659 "enable_placement_id": 0, 00:05:54.659 "enable_zerocopy_send_server": true, 00:05:54.659 "enable_zerocopy_send_client": false, 00:05:54.659 "zerocopy_threshold": 0, 00:05:54.659 "tls_version": 0, 00:05:54.659 "enable_ktls": false 00:05:54.659 } 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "method": "sock_impl_set_options", 00:05:54.659 "params": { 00:05:54.659 "impl_name": "posix", 00:05:54.659 "recv_buf_size": 2097152, 00:05:54.659 "send_buf_size": 2097152, 00:05:54.659 "enable_recv_pipe": true, 00:05:54.659 "enable_quickack": false, 00:05:54.659 "enable_placement_id": 0, 00:05:54.659 "enable_zerocopy_send_server": true, 00:05:54.659 "enable_zerocopy_send_client": false, 00:05:54.659 "zerocopy_threshold": 0, 00:05:54.659 "tls_version": 0, 00:05:54.659 "enable_ktls": false 00:05:54.659 } 00:05:54.659 } 00:05:54.659 ] 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "vmd", 00:05:54.659 "config": [] 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "accel", 00:05:54.659 "config": [ 00:05:54.659 { 00:05:54.659 "method": "accel_set_options", 00:05:54.659 "params": { 00:05:54.659 "small_cache_size": 128, 00:05:54.659 "large_cache_size": 16, 00:05:54.659 "task_count": 2048, 00:05:54.659 "sequence_count": 2048, 00:05:54.659 "buf_count": 2048 00:05:54.659 } 00:05:54.659 } 00:05:54.659 ] 00:05:54.659 }, 00:05:54.659 { 00:05:54.659 "subsystem": "bdev", 00:05:54.659 "config": [ 00:05:54.659 { 00:05:54.659 "method": "bdev_set_options", 00:05:54.659 "params": { 00:05:54.659 "bdev_io_pool_size": 65535, 00:05:54.660 "bdev_io_cache_size": 256, 00:05:54.660 "bdev_auto_examine": true, 00:05:54.660 "iobuf_small_cache_size": 128, 00:05:54.660 "iobuf_large_cache_size": 16 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "bdev_raid_set_options", 00:05:54.660 "params": { 00:05:54.660 "process_window_size_kb": 1024 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "bdev_iscsi_set_options", 00:05:54.660 "params": { 00:05:54.660 "timeout_sec": 30 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "bdev_nvme_set_options", 00:05:54.660 "params": { 00:05:54.660 "action_on_timeout": "none", 00:05:54.660 "timeout_us": 0, 00:05:54.660 "timeout_admin_us": 0, 00:05:54.660 "keep_alive_timeout_ms": 10000, 00:05:54.660 "arbitration_burst": 0, 00:05:54.660 "low_priority_weight": 0, 00:05:54.660 "medium_priority_weight": 0, 00:05:54.660 "high_priority_weight": 0, 00:05:54.660 "nvme_adminq_poll_period_us": 10000, 00:05:54.660 "nvme_ioq_poll_period_us": 0, 00:05:54.660 "io_queue_requests": 0, 00:05:54.660 "delay_cmd_submit": true, 00:05:54.660 "transport_retry_count": 4, 00:05:54.660 "bdev_retry_count": 3, 00:05:54.660 "transport_ack_timeout": 0, 00:05:54.660 "ctrlr_loss_timeout_sec": 0, 00:05:54.660 "reconnect_delay_sec": 0, 00:05:54.660 "fast_io_fail_timeout_sec": 0, 00:05:54.660 "disable_auto_failback": false, 00:05:54.660 "generate_uuids": false, 00:05:54.660 "transport_tos": 0, 00:05:54.660 "nvme_error_stat": false, 00:05:54.660 "rdma_srq_size": 0, 00:05:54.660 "io_path_stat": false, 00:05:54.660 "allow_accel_sequence": false, 00:05:54.660 "rdma_max_cq_size": 0, 00:05:54.660 "rdma_cm_event_timeout_ms": 0, 00:05:54.660 "dhchap_digests": [ 00:05:54.660 "sha256", 00:05:54.660 "sha384", 00:05:54.660 "sha512" 
00:05:54.660 ], 00:05:54.660 "dhchap_dhgroups": [ 00:05:54.660 "null", 00:05:54.660 "ffdhe2048", 00:05:54.660 "ffdhe3072", 00:05:54.660 "ffdhe4096", 00:05:54.660 "ffdhe6144", 00:05:54.660 "ffdhe8192" 00:05:54.660 ] 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "bdev_nvme_set_hotplug", 00:05:54.660 "params": { 00:05:54.660 "period_us": 100000, 00:05:54.660 "enable": false 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "bdev_wait_for_examine" 00:05:54.660 } 00:05:54.660 ] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "scsi", 00:05:54.660 "config": null 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "scheduler", 00:05:54.660 "config": [ 00:05:54.660 { 00:05:54.660 "method": "framework_set_scheduler", 00:05:54.660 "params": { 00:05:54.660 "name": "static" 00:05:54.660 } 00:05:54.660 } 00:05:54.660 ] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "vhost_scsi", 00:05:54.660 "config": [] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "vhost_blk", 00:05:54.660 "config": [] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "ublk", 00:05:54.660 "config": [] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "nbd", 00:05:54.660 "config": [] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "nvmf", 00:05:54.660 "config": [ 00:05:54.660 { 00:05:54.660 "method": "nvmf_set_config", 00:05:54.660 "params": { 00:05:54.660 "discovery_filter": "match_any", 00:05:54.660 "admin_cmd_passthru": { 00:05:54.660 "identify_ctrlr": false 00:05:54.660 } 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "nvmf_set_max_subsystems", 00:05:54.660 "params": { 00:05:54.660 "max_subsystems": 1024 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "nvmf_set_crdt", 00:05:54.660 "params": { 00:05:54.660 "crdt1": 0, 00:05:54.660 "crdt2": 0, 00:05:54.660 "crdt3": 0 00:05:54.660 } 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "method": "nvmf_create_transport", 00:05:54.660 "params": { 00:05:54.660 "trtype": "TCP", 00:05:54.660 "max_queue_depth": 128, 00:05:54.660 "max_io_qpairs_per_ctrlr": 127, 00:05:54.660 "in_capsule_data_size": 4096, 00:05:54.660 "max_io_size": 131072, 00:05:54.660 "io_unit_size": 131072, 00:05:54.660 "max_aq_depth": 128, 00:05:54.660 "num_shared_buffers": 511, 00:05:54.660 "buf_cache_size": 4294967295, 00:05:54.660 "dif_insert_or_strip": false, 00:05:54.660 "zcopy": false, 00:05:54.660 "c2h_success": true, 00:05:54.660 "sock_priority": 0, 00:05:54.660 "abort_timeout_sec": 1, 00:05:54.660 "ack_timeout": 0, 00:05:54.660 "data_wr_pool_size": 0 00:05:54.660 } 00:05:54.660 } 00:05:54.660 ] 00:05:54.660 }, 00:05:54.660 { 00:05:54.660 "subsystem": "iscsi", 00:05:54.660 "config": [ 00:05:54.660 { 00:05:54.660 "method": "iscsi_set_options", 00:05:54.660 "params": { 00:05:54.660 "node_base": "iqn.2016-06.io.spdk", 00:05:54.660 "max_sessions": 128, 00:05:54.660 "max_connections_per_session": 2, 00:05:54.660 "max_queue_depth": 64, 00:05:54.660 "default_time2wait": 2, 00:05:54.660 "default_time2retain": 20, 00:05:54.660 "first_burst_length": 8192, 00:05:54.660 "immediate_data": true, 00:05:54.660 "allow_duplicated_isid": false, 00:05:54.660 "error_recovery_level": 0, 00:05:54.660 "nop_timeout": 60, 00:05:54.660 "nop_in_interval": 30, 00:05:54.660 "disable_chap": false, 00:05:54.660 "require_chap": false, 00:05:54.660 "mutual_chap": false, 00:05:54.660 "chap_group": 0, 00:05:54.660 "max_large_datain_per_connection": 64, 00:05:54.660 "max_r2t_per_connection": 4, 00:05:54.660 
"pdu_pool_size": 36864, 00:05:54.660 "immediate_data_pool_size": 16384, 00:05:54.660 "data_out_pool_size": 2048 00:05:54.660 } 00:05:54.660 } 00:05:54.660 ] 00:05:54.660 } 00:05:54.660 ] 00:05:54.660 } 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 306486 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 306486 ']' 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 306486 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 306486 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 306486' 00:05:54.660 killing process with pid 306486 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 306486 00:05:54.660 03:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 306486 00:05:55.226 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=306626 00:05:55.226 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.226 03:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 306626 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 306626 ']' 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 306626 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 306626 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 306626' 00:06:00.486 killing process with pid 306626 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 306626 00:06:00.486 03:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 306626 00:06:00.486 03:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.486 03:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.486 00:06:00.486 real 0m6.516s 
00:06:00.486 user 0m6.085s 00:06:00.486 sys 0m0.705s 00:06:00.486 03:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.486 03:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 ************************************ 00:06:00.745 END TEST skip_rpc_with_json 00:06:00.745 ************************************ 00:06:00.745 03:05:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 ************************************ 00:06:00.745 START TEST skip_rpc_with_delay 00:06:00.745 ************************************ 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.745 [2024-07-23 03:05:27.162913] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
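The error just printed is the whole point of test_skip_rpc_with_delay: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to make spdk_tgt refuse to run. A small sketch of that assertion, with the binary path again a placeholder:

```bash
#!/usr/bin/env bash
# Sketch: this flag combination is invalid by design and must exit non-zero.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}

if "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
echo "spdk_tgt rejected the invalid flag combination, as expected"
```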
00:06:00.745 [2024-07-23 03:05:27.163030] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.745 00:06:00.745 real 0m0.066s 00:06:00.745 user 0m0.047s 00:06:00.745 sys 0m0.019s 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.745 03:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 ************************************ 00:06:00.745 END TEST skip_rpc_with_delay 00:06:00.745 ************************************ 00:06:00.745 03:05:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.745 03:05:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.745 03:05:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.745 03:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 ************************************ 00:06:00.745 START TEST exit_on_failed_rpc_init 00:06:00.745 ************************************ 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=307344 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 307344 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 307344 ']' 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.745 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 [2024-07-23 03:05:27.280455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
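exit_on_failed_rpc_init first brings up a normal target and waits for its RPC socket. waitforlisten is an autotest_common.sh helper, but the idea is simply to poll the socket until an RPC answers; a rough, hypothetical stand-in for that loop (timeout and paths are assumptions, not values from this log) could look like:

```bash
#!/usr/bin/env bash
# Rough equivalent of waitforlisten: poll the RPC socket until the target answers.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
sock=${1:-/var/tmp/spdk.sock}

for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$sock" spdk_get_version >/dev/null 2>&1; then
        echo "target is listening on $sock"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $sock" >&2
exit 1
```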
00:06:00.745 [2024-07-23 03:05:27.280561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307344 ] 00:06:00.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.004 [2024-07-23 03:05:27.343340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.004 [2024-07-23 03:05:27.432502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:01.262 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.262 [2024-07-23 03:05:27.742830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:01.262 [2024-07-23 03:05:27.742898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307355 ] 00:06:01.262 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.262 [2024-07-23 03:05:27.806539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.520 [2024-07-23 03:05:27.902188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.520 [2024-07-23 03:05:27.902305] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
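Here the second target (-m 0x2) was started against the same default RPC socket, so rpc_listen reports it as already in use, and the lines that follow show the app giving up and stopping with a non-zero status, which is exactly what the test wants to observe. If two targets genuinely need to coexist, the second one has to be pointed at its own socket with -r, as the json_config tests later do. A sketch of that arrangement (the second socket path is invented for illustration):

```bash
#!/usr/bin/env bash
# Sketch: run two targets side by side by giving each its own RPC socket.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk_tgt.sock &
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk_second.sock &
sleep 3   # the real tests poll each socket with waitforlisten instead of sleeping

# Each instance is then addressed through its own socket.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock spdk_get_version
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_second.sock spdk_get_version
```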
00:06:01.520 [2024-07-23 03:05:27.902327] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.520 [2024-07-23 03:05:27.902341] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 307344 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 307344 ']' 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 307344 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.520 03:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 307344 00:06:01.520 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.520 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.520 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 307344' 00:06:01.520 killing process with pid 307344 00:06:01.520 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 307344 00:06:01.520 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 307344 00:06:02.086 00:06:02.086 real 0m1.200s 00:06:02.086 user 0m1.295s 00:06:02.086 sys 0m0.464s 00:06:02.086 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.086 03:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.086 ************************************ 00:06:02.086 END TEST exit_on_failed_rpc_init 00:06:02.086 ************************************ 00:06:02.086 03:05:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.086 00:06:02.086 real 0m13.494s 00:06:02.086 user 0m12.671s 00:06:02.086 sys 0m1.677s 00:06:02.086 03:05:28 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.086 03:05:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.086 ************************************ 00:06:02.086 END TEST skip_rpc 00:06:02.086 ************************************ 00:06:02.086 03:05:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:02.086 03:05:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.086 03:05:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.086 03:05:28 -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.086 ************************************ 00:06:02.086 START TEST rpc_client 00:06:02.086 ************************************ 00:06:02.086 03:05:28 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:02.086 * Looking for test storage... 00:06:02.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:02.086 03:05:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:02.086 OK 00:06:02.086 03:05:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.086 00:06:02.086 real 0m0.061s 00:06:02.086 user 0m0.027s 00:06:02.086 sys 0m0.038s 00:06:02.086 03:05:28 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.086 03:05:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.086 ************************************ 00:06:02.086 END TEST rpc_client 00:06:02.086 ************************************ 00:06:02.086 03:05:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:02.086 03:05:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.086 03:05:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.086 03:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.086 ************************************ 00:06:02.086 START TEST json_config 00:06:02.086 ************************************ 00:06:02.086 03:05:28 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:02.086 03:05:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.086 03:05:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.087 03:05:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.087 03:05:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.087 03:05:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.087 03:05:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.087 03:05:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.087 03:05:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.087 03:05:28 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.087 03:05:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@47 -- # : 0 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.087 03:05:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:02.087 03:05:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:02.345 INFO: JSON configuration test init 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.345 03:05:28 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:02.345 03:05:28 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.345 03:05:28 json_config -- json_config/common.sh@10 -- # shift 00:06:02.345 03:05:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.345 03:05:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.345 03:05:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.345 03:05:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.345 03:05:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.345 03:05:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=307597 00:06:02.345 03:05:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:02.345 03:05:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.345 Waiting for target to run... 
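The json_config target is started with --wait-for-rpc, so the framework stays paused until configuration arrives over the dedicated /var/tmp/spdk_tgt.sock socket; the test then replays a saved JSON file through load_config and later re-saves the live state to check for drift. A minimal sketch of that round trip, assuming the target is already waiting and using placeholder file names:

```bash
#!/usr/bin/env bash
# Sketch of the json_config round trip; assumes spdk_tgt was started with
# '-r /var/tmp/spdk_tgt.sock --wait-for-rpc' and is paused awaiting config.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Push a previously saved configuration into the paused target.
$rpc load_config < spdk_tgt_config.json

# Dump the live configuration again; ideally it matches what was loaded.
$rpc save_config > spdk_tgt_config_after.json
diff <(jq -S . spdk_tgt_config.json) <(jq -S . spdk_tgt_config_after.json) \
    && echo "configuration survived the load/save round trip"
```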
00:06:02.345 03:05:28 json_config -- json_config/common.sh@25 -- # waitforlisten 307597 /var/tmp/spdk_tgt.sock 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@827 -- # '[' -z 307597 ']' 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.345 03:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.345 [2024-07-23 03:05:28.715472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:02.345 [2024-07-23 03:05:28.715547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307597 ] 00:06:02.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.604 [2024-07-23 03:05:29.048446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.604 [2024-07-23 03:05:29.112853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:03.217 03:05:29 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.217 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.217 03:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:03.217 03:05:29 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:03.217 03:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:06.501 03:05:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.501 03:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:06.501 03:05:32 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:06.501 03:05:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:06.501 03:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:06.501 03:05:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:06.501 03:05:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:06.501 03:05:33 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:06.501 03:05:33 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:06.501 03:05:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.501 03:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:06.759 03:05:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.759 03:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:06.759 03:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:06.759 MallocForNvmf0 00:06:06.759 03:05:33 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:06.759 03:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.017 MallocForNvmf1 00:06:07.017 03:05:33 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.017 03:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.274 [2024-07-23 03:05:33.800039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.274 03:05:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.274 03:05:33 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.532 03:05:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.532 03:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.790 03:05:34 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:07.790 03:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.048 03:05:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.048 03:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.305 [2024-07-23 03:05:34.775347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:08.305 03:05:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:08.305 03:05:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.305 03:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.305 03:05:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:08.305 03:05:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.305 03:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.305 03:05:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:08.305 03:05:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.305 03:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.563 MallocBdevForConfigChangeCheck 00:06:08.563 03:05:35 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:08.563 03:05:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.563 03:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.563 03:05:35 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:08.563 03:05:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.129 03:05:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:09.129 INFO: shutting down applications... 
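[Annotation] The NVMe-oF configuration assembled above is a short rpc.py sequence against the target socket. Condensed into a sketch with the same parameters as in the trace (paths shortened, redirect target named after the file the harness uses):

  # small wrapper mirroring the tgt_rpc helper in the trace
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0            # 8 MB malloc bdev, 512-byte blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1           # 4 MB malloc bdev, 1024-byte blocks
  rpc nvmf_create_transport -t tcp -u 8192 -c 0                 # TCP transport
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json                        # snapshot used for the relaunch below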
00:06:09.129 03:05:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:09.129 03:05:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:09.129 03:05:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:09.129 03:05:35 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:11.029 Calling clear_iscsi_subsystem 00:06:11.029 Calling clear_nvmf_subsystem 00:06:11.029 Calling clear_nbd_subsystem 00:06:11.029 Calling clear_ublk_subsystem 00:06:11.029 Calling clear_vhost_blk_subsystem 00:06:11.029 Calling clear_vhost_scsi_subsystem 00:06:11.029 Calling clear_bdev_subsystem 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@345 -- # break 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:11.029 03:05:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:11.029 03:05:37 json_config -- json_config/common.sh@31 -- # local app=target 00:06:11.029 03:05:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.029 03:05:37 json_config -- json_config/common.sh@35 -- # [[ -n 307597 ]] 00:06:11.029 03:05:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 307597 00:06:11.029 03:05:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.029 03:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.029 03:05:37 json_config -- json_config/common.sh@41 -- # kill -0 307597 00:06:11.029 03:05:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.596 03:05:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.596 03:05:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.596 03:05:38 json_config -- json_config/common.sh@41 -- # kill -0 307597 00:06:11.596 03:05:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.596 03:05:38 json_config -- json_config/common.sh@43 -- # break 00:06:11.596 03:05:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.596 03:05:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.596 SPDK target shutdown done 00:06:11.596 03:05:38 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:11.596 INFO: relaunching applications... 
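[Annotation] The shutdown step above is SIGINT plus a bounded liveness poll; roughly, assuming pid holds the target pid the way app_pid["target"] does in the trace:

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      # kill -0 only checks whether the process still exists
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done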
00:06:11.596 03:05:38 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.596 03:05:38 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.596 03:05:38 json_config -- json_config/common.sh@10 -- # shift 00:06:11.596 03:05:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.596 03:05:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.597 03:05:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.597 03:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.597 03:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.597 03:05:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=308786 00:06:11.597 03:05:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.597 03:05:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.597 Waiting for target to run... 00:06:11.597 03:05:38 json_config -- json_config/common.sh@25 -- # waitforlisten 308786 /var/tmp/spdk_tgt.sock 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@827 -- # '[' -z 308786 ']' 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.597 03:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.597 [2024-07-23 03:05:38.062050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:11.597 [2024-07-23 03:05:38.062150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308786 ] 00:06:11.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.163 [2024-07-23 03:05:38.572690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.163 [2024-07-23 03:05:38.654782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.443 [2024-07-23 03:05:41.685697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.443 [2024-07-23 03:05:41.718132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.008 03:05:42 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.008 03:05:42 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:16.008 03:05:42 json_config -- json_config/common.sh@26 -- # echo '' 00:06:16.008 00:06:16.008 03:05:42 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:16.008 03:05:42 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:16.008 INFO: Checking if target configuration is the same... 
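[Annotation] The relaunch and the "is the configuration the same" check that follows boil down to: start the target from the saved JSON, dump the live config again, normalize both sides with config_filter.py -method sort, and diff them. A sketch using the same helpers as the trace (temp file names here are illustrative; the harness uses mktemp):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &
  # ...wait for the RPC socket as before, then compare the live config against the file
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'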
00:06:16.008 03:05:42 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:16.008 03:05:42 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:16.008 03:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.008 + '[' 2 -ne 2 ']' 00:06:16.008 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:16.008 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:16.008 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:16.008 +++ basename /dev/fd/62 00:06:16.008 ++ mktemp /tmp/62.XXX 00:06:16.008 + tmp_file_1=/tmp/62.gEc 00:06:16.008 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:16.008 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:16.008 + tmp_file_2=/tmp/spdk_tgt_config.json.LXG 00:06:16.008 + ret=0 00:06:16.008 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.574 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.574 + diff -u /tmp/62.gEc /tmp/spdk_tgt_config.json.LXG 00:06:16.574 + echo 'INFO: JSON config files are the same' 00:06:16.574 INFO: JSON config files are the same 00:06:16.574 + rm /tmp/62.gEc /tmp/spdk_tgt_config.json.LXG 00:06:16.574 + exit 0 00:06:16.574 03:05:42 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:16.574 03:05:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:16.574 INFO: changing configuration and checking if this can be detected... 00:06:16.574 03:05:42 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:16.574 03:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:16.831 03:05:43 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:16.831 03:05:43 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:16.831 03:05:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.831 + '[' 2 -ne 2 ']' 00:06:16.831 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:16.831 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:16.831 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:16.831 +++ basename /dev/fd/62 00:06:16.831 ++ mktemp /tmp/62.XXX 00:06:16.831 + tmp_file_1=/tmp/62.ZMr 00:06:16.831 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:16.831 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:16.831 + tmp_file_2=/tmp/spdk_tgt_config.json.b9t 00:06:16.831 + ret=0 00:06:16.831 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.096 + diff -u /tmp/62.ZMr /tmp/spdk_tgt_config.json.b9t 00:06:17.096 + ret=1 00:06:17.096 + echo '=== Start of file: /tmp/62.ZMr ===' 00:06:17.096 + cat /tmp/62.ZMr 00:06:17.096 + echo '=== End of file: /tmp/62.ZMr ===' 00:06:17.096 + echo '' 00:06:17.096 + echo '=== Start of file: /tmp/spdk_tgt_config.json.b9t ===' 00:06:17.096 + cat /tmp/spdk_tgt_config.json.b9t 00:06:17.096 + echo '=== End of file: /tmp/spdk_tgt_config.json.b9t ===' 00:06:17.096 + echo '' 00:06:17.096 + rm /tmp/62.ZMr /tmp/spdk_tgt_config.json.b9t 00:06:17.096 + exit 1 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:17.096 INFO: configuration change detected. 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@317 -- # [[ -n 308786 ]] 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 03:05:43 json_config -- json_config/json_config.sh@323 -- # killprocess 308786 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@946 -- # '[' -z 308786 ']' 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@950 -- # kill -0 308786 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@951 -- # uname 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.096 03:05:43 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 308786 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 308786' 00:06:17.096 killing process with pid 308786 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@965 -- # kill 308786 00:06:17.096 03:05:43 json_config -- common/autotest_common.sh@970 -- # wait 308786 00:06:18.999 03:05:45 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.999 03:05:45 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:18.999 03:05:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.999 03:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.999 03:05:45 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:18.999 03:05:45 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:18.999 INFO: Success 00:06:18.999 00:06:18.999 real 0m16.663s 00:06:18.999 user 0m18.610s 00:06:18.999 sys 0m2.015s 00:06:18.999 03:05:45 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.999 03:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.999 ************************************ 00:06:18.999 END TEST json_config 00:06:18.999 ************************************ 00:06:18.999 03:05:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.999 03:05:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.999 03:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.999 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:18.999 ************************************ 00:06:18.999 START TEST json_config_extra_key 00:06:18.999 ************************************ 00:06:18.999 03:05:45 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.999 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.999 03:05:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.999 03:05:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.999 03:05:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.999 03:05:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.999 03:05:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.000 03:05:45 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.000 03:05:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.000 03:05:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.000 03:05:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.000 03:05:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.000 03:05:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.000 03:05:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.000 03:05:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:19.000 03:05:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.000 03:05:45 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.000 03:05:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:19.000 INFO: launching applications... 00:06:19.000 03:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=309834 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.000 Waiting for target to run... 
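[Annotation] The contents of extra_key.json are never printed in this log. For orientation only: an SPDK --json config file is a "subsystems" array of RPC method/params pairs, so an illustrative stand-in (not the actual file used by this test; the bdev name and path below are made up) could be written and fed to the target like this:

  cat > /tmp/extra_key.illustrative.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "MallocIllustrative0", "num_blocks": 16384, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.illustrative.json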
00:06:19.000 03:05:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 309834 /var/tmp/spdk_tgt.sock 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 309834 ']' 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.000 03:05:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.000 [2024-07-23 03:05:45.417770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:19.000 [2024-07-23 03:05:45.417853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309834 ] 00:06:19.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.257 [2024-07-23 03:05:45.755412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.257 [2024-07-23 03:05:45.819210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.821 03:05:46 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.821 03:05:46 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.821 00:06:19.821 03:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:19.821 INFO: shutting down applications... 
00:06:19.821 03:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 309834 ]] 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 309834 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 309834 00:06:19.821 03:05:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 309834 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.386 03:05:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.386 SPDK target shutdown done 00:06:20.386 03:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:20.386 Success 00:06:20.386 00:06:20.386 real 0m1.524s 00:06:20.386 user 0m1.464s 00:06:20.386 sys 0m0.430s 00:06:20.386 03:05:46 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.386 03:05:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.386 ************************************ 00:06:20.386 END TEST json_config_extra_key 00:06:20.386 ************************************ 00:06:20.386 03:05:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.386 03:05:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.386 03:05:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.386 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.386 ************************************ 00:06:20.386 START TEST alias_rpc 00:06:20.386 ************************************ 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.386 * Looking for test storage... 
00:06:20.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:20.386 03:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.386 03:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=310016 00:06:20.386 03:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.386 03:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 310016 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 310016 ']' 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.386 03:05:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.644 [2024-07-23 03:05:46.991109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:20.644 [2024-07-23 03:05:46.991203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310016 ] 00:06:20.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.644 [2024-07-23 03:05:47.048668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.644 [2024-07-23 03:05:47.135281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.901 03:05:47 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.901 03:05:47 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.901 03:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:21.159 03:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 310016 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 310016 ']' 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 310016 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310016 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310016' 00:06:21.159 killing process with pid 310016 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@965 -- # kill 310016 00:06:21.159 03:05:47 alias_rpc -- common/autotest_common.sh@970 -- # wait 310016 00:06:21.758 00:06:21.758 real 0m1.227s 00:06:21.758 user 0m1.303s 00:06:21.758 sys 0m0.439s 00:06:21.758 03:05:48 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.758 03:05:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.758 
************************************ 00:06:21.758 END TEST alias_rpc 00:06:21.758 ************************************ 00:06:21.758 03:05:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:21.758 03:05:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:21.758 03:05:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.758 03:05:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.758 03:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:21.758 ************************************ 00:06:21.758 START TEST spdkcli_tcp 00:06:21.758 ************************************ 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:21.758 * Looking for test storage... 00:06:21.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=310253 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:21.758 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 310253 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 310253 ']' 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.758 03:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.758 [2024-07-23 03:05:48.277530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:21.758 [2024-07-23 03:05:48.277642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310253 ] 00:06:21.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.016 [2024-07-23 03:05:48.336784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.016 [2024-07-23 03:05:48.421854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.016 [2024-07-23 03:05:48.421858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.275 03:05:48 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.275 03:05:48 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:22.275 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=310337 00:06:22.275 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:22.275 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:22.533 [ 00:06:22.533 "bdev_malloc_delete", 00:06:22.533 "bdev_malloc_create", 00:06:22.533 "bdev_null_resize", 00:06:22.533 "bdev_null_delete", 00:06:22.533 "bdev_null_create", 00:06:22.533 "bdev_nvme_cuse_unregister", 00:06:22.533 "bdev_nvme_cuse_register", 00:06:22.533 "bdev_opal_new_user", 00:06:22.533 "bdev_opal_set_lock_state", 00:06:22.533 "bdev_opal_delete", 00:06:22.533 "bdev_opal_get_info", 00:06:22.533 "bdev_opal_create", 00:06:22.533 "bdev_nvme_opal_revert", 00:06:22.533 "bdev_nvme_opal_init", 00:06:22.533 "bdev_nvme_send_cmd", 00:06:22.533 "bdev_nvme_get_path_iostat", 00:06:22.533 "bdev_nvme_get_mdns_discovery_info", 00:06:22.533 "bdev_nvme_stop_mdns_discovery", 00:06:22.533 "bdev_nvme_start_mdns_discovery", 00:06:22.533 "bdev_nvme_set_multipath_policy", 00:06:22.533 "bdev_nvme_set_preferred_path", 00:06:22.533 "bdev_nvme_get_io_paths", 00:06:22.533 "bdev_nvme_remove_error_injection", 00:06:22.533 "bdev_nvme_add_error_injection", 00:06:22.533 "bdev_nvme_get_discovery_info", 00:06:22.533 "bdev_nvme_stop_discovery", 00:06:22.533 "bdev_nvme_start_discovery", 00:06:22.533 "bdev_nvme_get_controller_health_info", 00:06:22.533 "bdev_nvme_disable_controller", 00:06:22.533 "bdev_nvme_enable_controller", 00:06:22.533 "bdev_nvme_reset_controller", 00:06:22.533 "bdev_nvme_get_transport_statistics", 00:06:22.533 "bdev_nvme_apply_firmware", 00:06:22.533 "bdev_nvme_detach_controller", 00:06:22.533 "bdev_nvme_get_controllers", 00:06:22.533 "bdev_nvme_attach_controller", 00:06:22.533 "bdev_nvme_set_hotplug", 00:06:22.533 "bdev_nvme_set_options", 00:06:22.533 "bdev_passthru_delete", 00:06:22.533 "bdev_passthru_create", 00:06:22.533 "bdev_lvol_set_parent_bdev", 00:06:22.533 "bdev_lvol_set_parent", 00:06:22.533 "bdev_lvol_check_shallow_copy", 00:06:22.533 "bdev_lvol_start_shallow_copy", 00:06:22.533 "bdev_lvol_grow_lvstore", 00:06:22.533 "bdev_lvol_get_lvols", 00:06:22.533 "bdev_lvol_get_lvstores", 00:06:22.533 "bdev_lvol_delete", 00:06:22.533 "bdev_lvol_set_read_only", 00:06:22.533 "bdev_lvol_resize", 00:06:22.533 "bdev_lvol_decouple_parent", 00:06:22.533 "bdev_lvol_inflate", 00:06:22.533 "bdev_lvol_rename", 00:06:22.533 "bdev_lvol_clone_bdev", 00:06:22.533 "bdev_lvol_clone", 00:06:22.533 "bdev_lvol_snapshot", 00:06:22.533 "bdev_lvol_create", 00:06:22.533 "bdev_lvol_delete_lvstore", 00:06:22.533 "bdev_lvol_rename_lvstore", 
00:06:22.533 "bdev_lvol_create_lvstore", 00:06:22.533 "bdev_raid_set_options", 00:06:22.533 "bdev_raid_remove_base_bdev", 00:06:22.533 "bdev_raid_add_base_bdev", 00:06:22.533 "bdev_raid_delete", 00:06:22.533 "bdev_raid_create", 00:06:22.533 "bdev_raid_get_bdevs", 00:06:22.533 "bdev_error_inject_error", 00:06:22.533 "bdev_error_delete", 00:06:22.533 "bdev_error_create", 00:06:22.533 "bdev_split_delete", 00:06:22.533 "bdev_split_create", 00:06:22.533 "bdev_delay_delete", 00:06:22.533 "bdev_delay_create", 00:06:22.533 "bdev_delay_update_latency", 00:06:22.533 "bdev_zone_block_delete", 00:06:22.533 "bdev_zone_block_create", 00:06:22.533 "blobfs_create", 00:06:22.533 "blobfs_detect", 00:06:22.533 "blobfs_set_cache_size", 00:06:22.533 "bdev_aio_delete", 00:06:22.533 "bdev_aio_rescan", 00:06:22.533 "bdev_aio_create", 00:06:22.533 "bdev_ftl_set_property", 00:06:22.533 "bdev_ftl_get_properties", 00:06:22.533 "bdev_ftl_get_stats", 00:06:22.533 "bdev_ftl_unmap", 00:06:22.533 "bdev_ftl_unload", 00:06:22.533 "bdev_ftl_delete", 00:06:22.533 "bdev_ftl_load", 00:06:22.533 "bdev_ftl_create", 00:06:22.533 "bdev_virtio_attach_controller", 00:06:22.533 "bdev_virtio_scsi_get_devices", 00:06:22.533 "bdev_virtio_detach_controller", 00:06:22.533 "bdev_virtio_blk_set_hotplug", 00:06:22.533 "bdev_iscsi_delete", 00:06:22.533 "bdev_iscsi_create", 00:06:22.533 "bdev_iscsi_set_options", 00:06:22.533 "accel_error_inject_error", 00:06:22.533 "ioat_scan_accel_module", 00:06:22.533 "dsa_scan_accel_module", 00:06:22.533 "iaa_scan_accel_module", 00:06:22.533 "vfu_virtio_create_scsi_endpoint", 00:06:22.533 "vfu_virtio_scsi_remove_target", 00:06:22.533 "vfu_virtio_scsi_add_target", 00:06:22.533 "vfu_virtio_create_blk_endpoint", 00:06:22.533 "vfu_virtio_delete_endpoint", 00:06:22.533 "keyring_file_remove_key", 00:06:22.533 "keyring_file_add_key", 00:06:22.533 "keyring_linux_set_options", 00:06:22.533 "iscsi_get_histogram", 00:06:22.533 "iscsi_enable_histogram", 00:06:22.533 "iscsi_set_options", 00:06:22.533 "iscsi_get_auth_groups", 00:06:22.533 "iscsi_auth_group_remove_secret", 00:06:22.533 "iscsi_auth_group_add_secret", 00:06:22.533 "iscsi_delete_auth_group", 00:06:22.533 "iscsi_create_auth_group", 00:06:22.533 "iscsi_set_discovery_auth", 00:06:22.533 "iscsi_get_options", 00:06:22.533 "iscsi_target_node_request_logout", 00:06:22.533 "iscsi_target_node_set_redirect", 00:06:22.533 "iscsi_target_node_set_auth", 00:06:22.533 "iscsi_target_node_add_lun", 00:06:22.533 "iscsi_get_stats", 00:06:22.533 "iscsi_get_connections", 00:06:22.533 "iscsi_portal_group_set_auth", 00:06:22.533 "iscsi_start_portal_group", 00:06:22.533 "iscsi_delete_portal_group", 00:06:22.533 "iscsi_create_portal_group", 00:06:22.533 "iscsi_get_portal_groups", 00:06:22.533 "iscsi_delete_target_node", 00:06:22.533 "iscsi_target_node_remove_pg_ig_maps", 00:06:22.533 "iscsi_target_node_add_pg_ig_maps", 00:06:22.533 "iscsi_create_target_node", 00:06:22.533 "iscsi_get_target_nodes", 00:06:22.533 "iscsi_delete_initiator_group", 00:06:22.533 "iscsi_initiator_group_remove_initiators", 00:06:22.533 "iscsi_initiator_group_add_initiators", 00:06:22.533 "iscsi_create_initiator_group", 00:06:22.533 "iscsi_get_initiator_groups", 00:06:22.533 "nvmf_set_crdt", 00:06:22.533 "nvmf_set_config", 00:06:22.533 "nvmf_set_max_subsystems", 00:06:22.533 "nvmf_stop_mdns_prr", 00:06:22.533 "nvmf_publish_mdns_prr", 00:06:22.533 "nvmf_subsystem_get_listeners", 00:06:22.533 "nvmf_subsystem_get_qpairs", 00:06:22.533 "nvmf_subsystem_get_controllers", 00:06:22.533 "nvmf_get_stats", 00:06:22.533 
"nvmf_get_transports", 00:06:22.533 "nvmf_create_transport", 00:06:22.533 "nvmf_get_targets", 00:06:22.533 "nvmf_delete_target", 00:06:22.533 "nvmf_create_target", 00:06:22.533 "nvmf_subsystem_allow_any_host", 00:06:22.533 "nvmf_subsystem_remove_host", 00:06:22.533 "nvmf_subsystem_add_host", 00:06:22.533 "nvmf_ns_remove_host", 00:06:22.533 "nvmf_ns_add_host", 00:06:22.533 "nvmf_subsystem_remove_ns", 00:06:22.533 "nvmf_subsystem_add_ns", 00:06:22.533 "nvmf_subsystem_listener_set_ana_state", 00:06:22.533 "nvmf_discovery_get_referrals", 00:06:22.533 "nvmf_discovery_remove_referral", 00:06:22.533 "nvmf_discovery_add_referral", 00:06:22.533 "nvmf_subsystem_remove_listener", 00:06:22.533 "nvmf_subsystem_add_listener", 00:06:22.533 "nvmf_delete_subsystem", 00:06:22.533 "nvmf_create_subsystem", 00:06:22.533 "nvmf_get_subsystems", 00:06:22.533 "env_dpdk_get_mem_stats", 00:06:22.533 "nbd_get_disks", 00:06:22.533 "nbd_stop_disk", 00:06:22.533 "nbd_start_disk", 00:06:22.533 "ublk_recover_disk", 00:06:22.533 "ublk_get_disks", 00:06:22.533 "ublk_stop_disk", 00:06:22.533 "ublk_start_disk", 00:06:22.533 "ublk_destroy_target", 00:06:22.533 "ublk_create_target", 00:06:22.533 "virtio_blk_create_transport", 00:06:22.533 "virtio_blk_get_transports", 00:06:22.533 "vhost_controller_set_coalescing", 00:06:22.533 "vhost_get_controllers", 00:06:22.533 "vhost_delete_controller", 00:06:22.533 "vhost_create_blk_controller", 00:06:22.533 "vhost_scsi_controller_remove_target", 00:06:22.533 "vhost_scsi_controller_add_target", 00:06:22.533 "vhost_start_scsi_controller", 00:06:22.533 "vhost_create_scsi_controller", 00:06:22.533 "thread_set_cpumask", 00:06:22.533 "framework_get_scheduler", 00:06:22.533 "framework_set_scheduler", 00:06:22.533 "framework_get_reactors", 00:06:22.533 "thread_get_io_channels", 00:06:22.534 "thread_get_pollers", 00:06:22.534 "thread_get_stats", 00:06:22.534 "framework_monitor_context_switch", 00:06:22.534 "spdk_kill_instance", 00:06:22.534 "log_enable_timestamps", 00:06:22.534 "log_get_flags", 00:06:22.534 "log_clear_flag", 00:06:22.534 "log_set_flag", 00:06:22.534 "log_get_level", 00:06:22.534 "log_set_level", 00:06:22.534 "log_get_print_level", 00:06:22.534 "log_set_print_level", 00:06:22.534 "framework_enable_cpumask_locks", 00:06:22.534 "framework_disable_cpumask_locks", 00:06:22.534 "framework_wait_init", 00:06:22.534 "framework_start_init", 00:06:22.534 "scsi_get_devices", 00:06:22.534 "bdev_get_histogram", 00:06:22.534 "bdev_enable_histogram", 00:06:22.534 "bdev_set_qos_limit", 00:06:22.534 "bdev_set_qd_sampling_period", 00:06:22.534 "bdev_get_bdevs", 00:06:22.534 "bdev_reset_iostat", 00:06:22.534 "bdev_get_iostat", 00:06:22.534 "bdev_examine", 00:06:22.534 "bdev_wait_for_examine", 00:06:22.534 "bdev_set_options", 00:06:22.534 "notify_get_notifications", 00:06:22.534 "notify_get_types", 00:06:22.534 "accel_get_stats", 00:06:22.534 "accel_set_options", 00:06:22.534 "accel_set_driver", 00:06:22.534 "accel_crypto_key_destroy", 00:06:22.534 "accel_crypto_keys_get", 00:06:22.534 "accel_crypto_key_create", 00:06:22.534 "accel_assign_opc", 00:06:22.534 "accel_get_module_info", 00:06:22.534 "accel_get_opc_assignments", 00:06:22.534 "vmd_rescan", 00:06:22.534 "vmd_remove_device", 00:06:22.534 "vmd_enable", 00:06:22.534 "sock_get_default_impl", 00:06:22.534 "sock_set_default_impl", 00:06:22.534 "sock_impl_set_options", 00:06:22.534 "sock_impl_get_options", 00:06:22.534 "iobuf_get_stats", 00:06:22.534 "iobuf_set_options", 00:06:22.534 "keyring_get_keys", 00:06:22.534 "framework_get_pci_devices", 
00:06:22.534 "framework_get_config", 00:06:22.534 "framework_get_subsystems", 00:06:22.534 "vfu_tgt_set_base_path", 00:06:22.534 "trace_get_info", 00:06:22.534 "trace_get_tpoint_group_mask", 00:06:22.534 "trace_disable_tpoint_group", 00:06:22.534 "trace_enable_tpoint_group", 00:06:22.534 "trace_clear_tpoint_mask", 00:06:22.534 "trace_set_tpoint_mask", 00:06:22.534 "spdk_get_version", 00:06:22.534 "rpc_get_methods" 00:06:22.534 ] 00:06:22.534 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:22.534 03:05:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 310253 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 310253 ']' 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 310253 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310253 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310253' 00:06:22.534 killing process with pid 310253 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 310253 00:06:22.534 03:05:48 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 310253 00:06:23.100 00:06:23.100 real 0m1.217s 00:06:23.100 user 0m2.173s 00:06:23.100 sys 0m0.435s 00:06:23.100 03:05:49 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.100 03:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.100 ************************************ 00:06:23.100 END TEST spdkcli_tcp 00:06:23.100 ************************************ 00:06:23.100 03:05:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.100 03:05:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.100 03:05:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.100 03:05:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.100 ************************************ 00:06:23.100 START TEST dpdk_mem_utility 00:06:23.100 ************************************ 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.100 * Looking for test storage... 
00:06:23.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:23.100 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.100 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=310526 00:06:23.100 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.100 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 310526 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 310526 ']' 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.100 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.100 [2024-07-23 03:05:49.536041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:23.100 [2024-07-23 03:05:49.536135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310526 ] 00:06:23.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.100 [2024-07-23 03:05:49.594478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.359 [2024-07-23 03:05:49.683220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.616 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.616 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:23.616 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.616 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.616 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.616 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.616 { 00:06:23.616 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.616 } 00:06:23.617 03:05:49 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.617 03:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.617 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:23.617 1 heaps totaling size 814.000000 MiB 00:06:23.617 size: 814.000000 MiB heap id: 0 00:06:23.617 end heaps---------- 00:06:23.617 8 mempools totaling size 598.116089 MiB 00:06:23.617 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.617 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.617 size: 84.521057 MiB name: bdev_io_310526 00:06:23.617 size: 51.011292 MiB name: evtpool_310526 00:06:23.617 size: 50.003479 MiB name: 
msgpool_310526 00:06:23.617 size: 21.763794 MiB name: PDU_Pool 00:06:23.617 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.617 size: 0.026123 MiB name: Session_Pool 00:06:23.617 end mempools------- 00:06:23.617 6 memzones totaling size 4.142822 MiB 00:06:23.617 size: 1.000366 MiB name: RG_ring_0_310526 00:06:23.617 size: 1.000366 MiB name: RG_ring_1_310526 00:06:23.617 size: 1.000366 MiB name: RG_ring_4_310526 00:06:23.617 size: 1.000366 MiB name: RG_ring_5_310526 00:06:23.617 size: 0.125366 MiB name: RG_ring_2_310526 00:06:23.617 size: 0.015991 MiB name: RG_ring_3_310526 00:06:23.617 end memzones------- 00:06:23.617 03:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.617 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:23.617 list of free elements. size: 12.519348 MiB 00:06:23.617 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:23.617 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:23.617 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:23.617 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:23.617 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:23.617 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:23.617 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:23.617 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:23.617 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:23.617 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:23.617 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:23.617 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:23.617 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:23.617 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:23.617 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:23.617 list of standard malloc elements. 
size: 199.218079 MiB 00:06:23.617 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:23.617 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:23.617 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:23.617 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:23.617 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:23.617 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.617 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:23.617 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.617 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:23.617 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:23.617 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:23.617 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:23.617 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:23.617 list of memzone associated elements. 
size: 602.262573 MiB 00:06:23.617 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:23.617 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.617 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:23.617 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.617 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:23.617 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_310526_0 00:06:23.617 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:23.617 associated memzone info: size: 48.002930 MiB name: MP_evtpool_310526_0 00:06:23.617 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:23.617 associated memzone info: size: 48.002930 MiB name: MP_msgpool_310526_0 00:06:23.617 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:23.617 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.617 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:23.617 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.617 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:23.617 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_310526 00:06:23.617 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:23.617 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_310526 00:06:23.617 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:23.617 associated memzone info: size: 1.007996 MiB name: MP_evtpool_310526 00:06:23.617 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:23.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.617 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:23.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.617 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:23.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.617 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:23.617 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.617 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:23.617 associated memzone info: size: 1.000366 MiB name: RG_ring_0_310526 00:06:23.617 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:23.617 associated memzone info: size: 1.000366 MiB name: RG_ring_1_310526 00:06:23.617 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:23.617 associated memzone info: size: 1.000366 MiB name: RG_ring_4_310526 00:06:23.617 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:23.617 associated memzone info: size: 1.000366 MiB name: RG_ring_5_310526 00:06:23.617 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:23.617 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_310526 00:06:23.617 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:23.617 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.617 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:23.617 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.617 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:23.617 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.617 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:23.617 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_310526 00:06:23.617 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:23.617 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.617 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:23.617 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.617 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:23.617 associated memzone info: size: 0.015991 MiB name: RG_ring_3_310526 00:06:23.617 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:23.617 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.617 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:23.617 associated memzone info: size: 0.000183 MiB name: MP_msgpool_310526 00:06:23.617 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:23.617 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_310526 00:06:23.617 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:23.617 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.617 03:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.617 03:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 310526 00:06:23.617 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 310526 ']' 00:06:23.617 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 310526 00:06:23.617 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310526 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310526' 00:06:23.618 killing process with pid 310526 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 310526 00:06:23.618 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 310526 00:06:24.183 00:06:24.183 real 0m1.076s 00:06:24.184 user 0m1.040s 00:06:24.184 sys 0m0.397s 00:06:24.184 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.184 03:05:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.184 ************************************ 00:06:24.184 END TEST dpdk_mem_utility 00:06:24.184 ************************************ 00:06:24.184 03:05:50 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:24.184 03:05:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.184 03:05:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.184 03:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.184 ************************************ 00:06:24.184 START TEST event 00:06:24.184 ************************************ 00:06:24.184 03:05:50 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:24.184 * Looking for test storage... 
00:06:24.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:24.184 03:05:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:24.184 03:05:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.184 03:05:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:24.184 03:05:50 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:24.184 03:05:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.184 03:05:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.184 ************************************ 00:06:24.184 START TEST event_perf 00:06:24.184 ************************************ 00:06:24.184 03:05:50 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:24.184 Running I/O for 1 seconds...[2024-07-23 03:05:50.643839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:24.184 [2024-07-23 03:05:50.643898] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310716 ] 00:06:24.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.184 [2024-07-23 03:05:50.704437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.442 [2024-07-23 03:05:50.799801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.442 [2024-07-23 03:05:50.799848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.442 [2024-07-23 03:05:50.799905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.442 [2024-07-23 03:05:50.799908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.374 Running I/O for 1 seconds... 00:06:25.374 lcore 0: 233193 00:06:25.374 lcore 1: 233193 00:06:25.374 lcore 2: 233192 00:06:25.374 lcore 3: 233192 00:06:25.374 done. 00:06:25.374 00:06:25.374 real 0m1.249s 00:06:25.374 user 0m4.163s 00:06:25.374 sys 0m0.081s 00:06:25.374 03:05:51 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.374 03:05:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.374 ************************************ 00:06:25.374 END TEST event_perf 00:06:25.374 ************************************ 00:06:25.374 03:05:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.374 03:05:51 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:25.374 03:05:51 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.374 03:05:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.374 ************************************ 00:06:25.374 START TEST event_reactor 00:06:25.374 ************************************ 00:06:25.374 03:05:51 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.374 [2024-07-23 03:05:51.942401] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:25.374 [2024-07-23 03:05:51.942465] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310875 ] 00:06:25.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.632 [2024-07-23 03:05:52.004073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.632 [2024-07-23 03:05:52.097182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.048 test_start 00:06:27.048 oneshot 00:06:27.048 tick 100 00:06:27.048 tick 100 00:06:27.048 tick 250 00:06:27.048 tick 100 00:06:27.048 tick 100 00:06:27.048 tick 100 00:06:27.048 tick 250 00:06:27.048 tick 500 00:06:27.048 tick 100 00:06:27.048 tick 100 00:06:27.048 tick 250 00:06:27.048 tick 100 00:06:27.048 tick 100 00:06:27.048 test_end 00:06:27.048 00:06:27.048 real 0m1.251s 00:06:27.048 user 0m1.164s 00:06:27.048 sys 0m0.081s 00:06:27.048 03:05:53 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.048 03:05:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:27.048 ************************************ 00:06:27.048 END TEST event_reactor 00:06:27.048 ************************************ 00:06:27.048 03:05:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.048 03:05:53 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:27.048 03:05:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.048 03:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.048 ************************************ 00:06:27.048 START TEST event_reactor_perf 00:06:27.048 ************************************ 00:06:27.048 03:05:53 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.048 [2024-07-23 03:05:53.237446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:27.048 [2024-07-23 03:05:53.237500] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311036 ] 00:06:27.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.048 [2024-07-23 03:05:53.300178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.048 [2024-07-23 03:05:53.393168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.982 test_start 00:06:27.982 test_end 00:06:27.982 Performance: 353233 events per second 00:06:27.982 00:06:27.982 real 0m1.247s 00:06:27.982 user 0m1.158s 00:06:27.982 sys 0m0.084s 00:06:27.982 03:05:54 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.982 03:05:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.982 ************************************ 00:06:27.982 END TEST event_reactor_perf 00:06:27.982 ************************************ 00:06:27.982 03:05:54 event -- event/event.sh@49 -- # uname -s 00:06:27.982 03:05:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.982 03:05:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.982 03:05:54 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.982 03:05:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.982 03:05:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.982 ************************************ 00:06:27.982 START TEST event_scheduler 00:06:27.982 ************************************ 00:06:27.982 03:05:54 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:28.240 * Looking for test storage... 00:06:28.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:28.240 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:28.240 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=311219 00:06:28.240 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:28.240 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.240 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 311219 00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 311219 ']' 00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.240 03:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.240 [2024-07-23 03:05:54.618161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:28.240 [2024-07-23 03:05:54.618249] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311219 ] 00:06:28.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.240 [2024-07-23 03:05:54.679472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.240 [2024-07-23 03:05:54.768649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.240 [2024-07-23 03:05:54.768678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.240 [2024-07-23 03:05:54.768734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.240 [2024-07-23 03:05:54.768737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:28.499 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.499 POWER: Env isn't set yet! 00:06:28.499 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:28.499 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:28.499 POWER: Cannot get available frequencies of lcore 0 00:06:28.499 POWER: Attempting to initialise PSTAT power management... 
00:06:28.499 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:28.499 POWER: Initialized successfully for lcore 0 power management 00:06:28.499 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:28.499 POWER: Initialized successfully for lcore 1 power management 00:06:28.499 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:28.499 POWER: Initialized successfully for lcore 2 power management 00:06:28.499 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:28.499 POWER: Initialized successfully for lcore 3 power management 00:06:28.499 [2024-07-23 03:05:54.848788] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.499 [2024-07-23 03:05:54.848805] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.499 [2024-07-23 03:05:54.848815] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.499 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.499 [2024-07-23 03:05:54.950117] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.499 03:05:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.499 03:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.499 ************************************ 00:06:28.499 START TEST scheduler_create_thread 00:06:28.499 ************************************ 00:06:28.499 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 2 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 3 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 4 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 5 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 6 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 7 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 8 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 9 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.500 10 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.500 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.758 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.131 03:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.132 03:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:30.132 03:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:30.132 03:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.132 03:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.064 03:05:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.064 00:06:31.064 real 0m2.616s 00:06:31.064 user 0m0.011s 00:06:31.064 sys 0m0.003s 00:06:31.064 03:05:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.064 03:05:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.064 ************************************ 00:06:31.064 END TEST scheduler_create_thread 00:06:31.064 ************************************ 00:06:31.064 03:05:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:31.064 03:05:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 311219 00:06:31.064 03:05:57 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 311219 ']' 00:06:31.064 03:05:57 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 311219 00:06:31.064 03:05:57 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
00:06:31.064 03:05:57 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.064 03:05:57 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311219 00:06:31.322 03:05:57 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:31.322 03:05:57 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:31.322 03:05:57 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311219' 00:06:31.322 killing process with pid 311219 00:06:31.322 03:05:57 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 311219 00:06:31.322 03:05:57 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 311219 00:06:31.580 [2024-07-23 03:05:58.077283] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:31.839 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:31.839 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:31.839 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:31.839 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:31.839 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:31.839 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:31.839 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:31.839 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:31.839 00:06:31.839 real 0m3.790s 00:06:31.839 user 0m5.719s 00:06:31.839 sys 0m0.349s 00:06:31.839 03:05:58 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.839 03:05:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.839 ************************************ 00:06:31.839 END TEST event_scheduler 00:06:31.839 ************************************ 00:06:31.839 03:05:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.839 03:05:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.839 03:05:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.839 03:05:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.839 03:05:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.839 ************************************ 00:06:31.839 START TEST app_repeat 00:06:31.839 ************************************ 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=311672 00:06:31.839 03:05:58 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 311672' 00:06:31.839 Process app_repeat pid: 311672 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.839 spdk_app_start Round 0 00:06:31.839 03:05:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 311672 /var/tmp/spdk-nbd.sock 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 311672 ']' 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.839 03:05:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.839 [2024-07-23 03:05:58.393988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:31.839 [2024-07-23 03:05:58.394054] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311672 ] 00:06:32.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.098 [2024-07-23 03:05:58.458652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.098 [2024-07-23 03:05:58.551365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.098 [2024-07-23 03:05:58.551371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.098 03:05:58 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.098 03:05:58 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:32.098 03:05:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.356 Malloc0 00:06:32.356 03:05:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.615 Malloc1 00:06:32.615 03:05:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.615 03:05:59 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.615 03:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.875 /dev/nbd0 00:06:32.875 03:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.875 03:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.875 03:05:59 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.133 1+0 records in 00:06:33.133 1+0 records out 00:06:33.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196968 s, 20.8 MB/s 00:06:33.133 03:05:59 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.133 03:05:59 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.133 03:05:59 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.133 03:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.133 03:05:59 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.133 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.133 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.133 03:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.391 /dev/nbd1 00:06:33.391 03:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.391 03:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:33.391 03:05:59 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.391 1+0 records in 00:06:33.391 1+0 records out 00:06:33.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232333 s, 17.6 MB/s 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.391 03:05:59 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.392 03:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.392 03:05:59 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.392 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.392 03:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.392 03:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.392 03:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.392 03:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.650 03:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.650 { 00:06:33.650 "nbd_device": "/dev/nbd0", 00:06:33.650 "bdev_name": "Malloc0" 00:06:33.650 }, 00:06:33.650 { 00:06:33.650 "nbd_device": "/dev/nbd1", 00:06:33.650 "bdev_name": "Malloc1" 00:06:33.650 } 00:06:33.650 ]' 00:06:33.650 03:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.650 { 00:06:33.650 "nbd_device": "/dev/nbd0", 00:06:33.650 "bdev_name": "Malloc0" 00:06:33.650 }, 00:06:33.650 { 00:06:33.650 "nbd_device": "/dev/nbd1", 00:06:33.650 "bdev_name": "Malloc1" 00:06:33.650 } 00:06:33.651 ]' 00:06:33.651 03:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.651 /dev/nbd1' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.651 /dev/nbd1' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.651 03:06:00 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.651 256+0 records in 00:06:33.651 256+0 records out 00:06:33.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488859 s, 214 MB/s 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.651 256+0 records in 00:06:33.651 256+0 records out 00:06:33.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239679 s, 43.7 MB/s 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.651 256+0 records in 00:06:33.651 256+0 records out 00:06:33.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256844 s, 40.8 MB/s 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.651 03:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.909 03:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.167 03:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.425 03:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.425 03:06:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.683 03:06:01 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:34.941 [2024-07-23 03:06:01.449701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.199 [2024-07-23 03:06:01.538270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.199 [2024-07-23 03:06:01.538271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.199 [2024-07-23 03:06:01.596210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.199 [2024-07-23 03:06:01.596295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.725 03:06:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.725 03:06:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:37.725 spdk_app_start Round 1 00:06:37.725 03:06:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 311672 /var/tmp/spdk-nbd.sock 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 311672 ']' 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.725 03:06:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.982 03:06:04 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.982 03:06:04 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:37.982 03:06:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.241 Malloc0 00:06:38.241 03:06:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.499 Malloc1 00:06:38.499 03:06:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.499 03:06:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.500 03:06:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.757 /dev/nbd0 00:06:38.757 03:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.757 03:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.757 1+0 records in 00:06:38.757 1+0 records out 00:06:38.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208089 s, 19.7 MB/s 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:38.757 03:06:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:38.757 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.757 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.757 03:06:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.015 /dev/nbd1 00:06:39.015 03:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.015 03:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
00:06:39.015 03:06:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.273 1+0 records in 00:06:39.273 1+0 records out 00:06:39.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249086 s, 16.4 MB/s 00:06:39.273 03:06:05 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.273 03:06:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:39.273 03:06:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.273 03:06:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.273 03:06:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.273 { 00:06:39.273 "nbd_device": "/dev/nbd0", 00:06:39.273 "bdev_name": "Malloc0" 00:06:39.273 }, 00:06:39.273 { 00:06:39.273 "nbd_device": "/dev/nbd1", 00:06:39.273 "bdev_name": "Malloc1" 00:06:39.273 } 00:06:39.273 ]' 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.273 { 00:06:39.273 "nbd_device": "/dev/nbd0", 00:06:39.273 "bdev_name": "Malloc0" 00:06:39.273 }, 00:06:39.273 { 00:06:39.273 "nbd_device": "/dev/nbd1", 00:06:39.273 "bdev_name": "Malloc1" 00:06:39.273 } 00:06:39.273 ]' 00:06:39.273 03:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.531 /dev/nbd1' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.531 /dev/nbd1' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.531 256+0 records in 00:06:39.531 256+0 records out 00:06:39.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506712 s, 207 MB/s 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.531 256+0 records in 00:06:39.531 256+0 records out 00:06:39.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234249 s, 44.8 MB/s 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.531 256+0 records in 00:06:39.531 256+0 records out 00:06:39.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253493 s, 41.4 MB/s 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.531 03:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.532 03:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.789 03:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.789 03:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.789 03:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.789 
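The write/verify pass traced above (nbd_dd_data_verify) pushes 1 MiB of random data through each exported NBD device and compares the first 1 MiB read back against the source file. Roughly, with the temp-file path shortened to $SPDK_DIR as an assumption:

  tmp=$SPDK_DIR/test/event/nbdrandtest

  # write phase: generate 1 MiB of random data, then copy it onto every NBD device
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
  done

  # verify phase: the data read back from each device must match the file byte for byte
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $tmp $nbd
  done
  rm $tmp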
03:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.789 03:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.790 03:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.790 03:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.790 03:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.790 03:06:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.790 03:06:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.047 03:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.304 03:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.304 03:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.305 03:06:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.305 03:06:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.562 03:06:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.820 [2024-07-23 03:06:07.329791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.078 [2024-07-23 03:06:07.419594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.078 [2024-07-23 03:06:07.419599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.078 [2024-07-23 03:06:07.479748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:41.078 [2024-07-23 03:06:07.479813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.603 03:06:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.603 03:06:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:43.603 spdk_app_start Round 2 00:06:43.603 03:06:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 311672 /var/tmp/spdk-nbd.sock 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 311672 ']' 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.603 03:06:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.861 03:06:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.861 03:06:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:43.861 03:06:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.119 Malloc0 00:06:44.119 03:06:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.377 Malloc1 00:06:44.377 03:06:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.377 03:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.378 03:06:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.635 /dev/nbd0 00:06:44.635 03:06:11 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.635 03:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.635 03:06:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:44.635 03:06:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:44.635 03:06:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:44.635 03:06:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.636 1+0 records in 00:06:44.636 1+0 records out 00:06:44.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216956 s, 18.9 MB/s 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:44.636 03:06:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:44.636 03:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.636 03:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.636 03:06:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.894 /dev/nbd1 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.894 1+0 records in 00:06:44.894 1+0 records out 00:06:44.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205201 s, 20.0 MB/s 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:44.894 03:06:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.894 03:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.151 03:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.151 { 00:06:45.151 "nbd_device": "/dev/nbd0", 00:06:45.151 "bdev_name": "Malloc0" 00:06:45.151 }, 00:06:45.151 { 00:06:45.152 "nbd_device": "/dev/nbd1", 00:06:45.152 "bdev_name": "Malloc1" 00:06:45.152 } 00:06:45.152 ]' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.152 { 00:06:45.152 "nbd_device": "/dev/nbd0", 00:06:45.152 "bdev_name": "Malloc0" 00:06:45.152 }, 00:06:45.152 { 00:06:45.152 "nbd_device": "/dev/nbd1", 00:06:45.152 "bdev_name": "Malloc1" 00:06:45.152 } 00:06:45.152 ]' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.152 /dev/nbd1' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.152 /dev/nbd1' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.152 256+0 records in 00:06:45.152 256+0 records out 00:06:45.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493338 s, 213 MB/s 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.152 03:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.410 256+0 records in 00:06:45.410 256+0 records out 00:06:45.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236916 s, 44.3 MB/s 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.410 256+0 records in 00:06:45.410 256+0 records out 00:06:45.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255609 s, 41.0 MB/s 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.410 03:06:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
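Teardown mirrors setup: each device is detached over RPC, waitfornbd_exit polls until the node drops out of /proc/partitions, and nbd_get_count confirms nothing is left exported. A sketch reconstructed from the trace; the 20-iteration bound comes from the xtrace, while the 0.1 s retry delay is an assumption:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  for nbd in /dev/nbd0 /dev/nbd1; do
      $rpc nbd_stop_disk $nbd

      # waitfornbd_exit: poll until the kernel removes the partition entry (max 20 tries)
      name=$(basename $nbd)
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  done

  # nbd_get_count: the JSON list of exported disks should now be empty
  disks=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]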
00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.667 03:06:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.924 03:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.182 03:06:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.182 03:06:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.440 03:06:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.699 [2024-07-23 03:06:13.116291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.699 [2024-07-23 03:06:13.206395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.699 [2024-07-23 03:06:13.206400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.699 [2024-07-23 03:06:13.262092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.699 [2024-07-23 03:06:13.262164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
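The app_repeat test drives this same malloc/NBD verification three times against one repeatedly restarted target, which is why the Round 0/1/2 markers and the SIGTERM/sleep pairs recur in the log. Its outer loop is roughly the following sketch, assuming the harness helpers (waitforlisten, nbd_rpc_data_verify) are already sourced and $app_pid holds the repeat app's PID, which the full script tracks:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # wait until the relaunched app is listening on the RPC socket again
      waitforlisten $app_pid /var/tmp/spdk-nbd.sock

      # build two malloc bdevs and run the NBD write/verify pass traced above
      $rpc bdev_malloc_create 64 4096
      $rpc bdev_malloc_create 64 4096
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

      # ask the app to shut down, then give it time to cycle into the next round
      $rpc spdk_kill_instance SIGTERM
      sleep 3
  done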
00:06:49.981 03:06:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 311672 /var/tmp/spdk-nbd.sock 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 311672 ']' 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.981 03:06:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:49.981 03:06:16 event.app_repeat -- event/event.sh@39 -- # killprocess 311672 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 311672 ']' 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 311672 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311672 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311672' 00:06:49.981 killing process with pid 311672 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@965 -- # kill 311672 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@970 -- # wait 311672 00:06:49.981 spdk_app_start is called in Round 0. 00:06:49.981 Shutdown signal received, stop current app iteration 00:06:49.981 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:49.981 spdk_app_start is called in Round 1. 00:06:49.981 Shutdown signal received, stop current app iteration 00:06:49.981 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:49.981 spdk_app_start is called in Round 2. 00:06:49.981 Shutdown signal received, stop current app iteration 00:06:49.981 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:49.981 spdk_app_start is called in Round 3. 
00:06:49.981 Shutdown signal received, stop current app iteration 00:06:49.981 03:06:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:49.981 03:06:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:49.981 00:06:49.981 real 0m18.009s 00:06:49.981 user 0m39.245s 00:06:49.981 sys 0m3.229s 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.981 03:06:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.981 ************************************ 00:06:49.981 END TEST app_repeat 00:06:49.981 ************************************ 00:06:49.981 03:06:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:49.981 03:06:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:49.981 03:06:16 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.981 03:06:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.981 03:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.981 ************************************ 00:06:49.981 START TEST cpu_locks 00:06:49.981 ************************************ 00:06:49.981 03:06:16 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:49.981 * Looking for test storage... 00:06:49.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:49.981 03:06:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:49.981 03:06:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:49.981 03:06:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:49.981 03:06:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:49.981 03:06:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.981 03:06:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.981 03:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.981 ************************************ 00:06:49.981 START TEST default_locks 00:06:49.981 ************************************ 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=314741 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 314741 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 314741 ']' 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.981 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
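killprocess, used above to tear down pid 311672 and reused by each cpu_locks subtest below, only signals the target after confirming it is still alive and is not a bare sudo wrapper. A sketch reconstructed from the autotest_common.sh xtrace; the sudo branch (which escalates instead of skipping) is omitted here:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1

      # only act if the process still exists
      if kill -0 "$pid" 2>/dev/null; then
          if [ "$(uname)" = Linux ]; then
              process_name=$(ps --no-headers -o comm= "$pid")
          fi
          # in the trace the process name is reactor_0, so the plain kill path is taken
          if [ "$process_name" != sudo ]; then
              echo "killing process with pid $pid"
              kill "$pid"
              wait "$pid" || true
          fi
      fi
  }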
00:06:49.982 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.982 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.239 [2024-07-23 03:06:16.560328] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:50.239 [2024-07-23 03:06:16.560433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314741 ] 00:06:50.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.239 [2024-07-23 03:06:16.620722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.239 [2024-07-23 03:06:16.705251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.499 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.499 03:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:50.499 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 314741 00:06:50.499 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 314741 00:06:50.499 03:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.756 lslocks: write error 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 314741 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 314741 ']' 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 314741 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.756 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314741 00:06:51.014 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.014 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.014 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314741' 00:06:51.014 killing process with pid 314741 00:06:51.014 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 314741 00:06:51.014 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 314741 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 314741 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 314741 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 314741 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 314741 ']' 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (314741) - No such process 00:06:51.273 ERROR: process (pid: 314741) is no longer running 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:51.273 00:06:51.273 real 0m1.240s 00:06:51.273 user 0m1.185s 00:06:51.273 sys 0m0.543s 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.273 03:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.273 ************************************ 00:06:51.273 END TEST default_locks 00:06:51.273 ************************************ 00:06:51.273 03:06:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:51.273 03:06:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.273 03:06:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.273 03:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.273 ************************************ 00:06:51.273 START TEST default_locks_via_rpc 00:06:51.273 ************************************ 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=314926 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 314926 00:06:51.273 03:06:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 314926 ']' 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.273 03:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.532 [2024-07-23 03:06:17.851369] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:51.532 [2024-07-23 03:06:17.851448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314926 ] 00:06:51.532 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.532 [2024-07-23 03:06:17.908964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.532 [2024-07-23 03:06:17.997328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.819 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.819 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 314926 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 314926 00:06:51.820 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 314926 00:06:52.079 03:06:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 314926 ']' 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 314926 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314926 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314926' 00:06:52.079 killing process with pid 314926 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 314926 00:06:52.079 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 314926 00:06:52.645 00:06:52.645 real 0m1.182s 00:06:52.645 user 0m1.128s 00:06:52.645 sys 0m0.512s 00:06:52.645 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.645 03:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 ************************************ 00:06:52.645 END TEST default_locks_via_rpc 00:06:52.645 ************************************ 00:06:52.645 03:06:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:52.645 03:06:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.645 03:06:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.645 03:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 ************************************ 00:06:52.645 START TEST non_locking_app_on_locked_coremask 00:06:52.645 ************************************ 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=315089 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 315089 /var/tmp/spdk.sock 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315089 ']' 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
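The default_locks and default_locks_via_rpc tests above both hinge on two primitives: checking whether a running spdk_tgt holds its CPU-core file lock, and toggling that locking at runtime over RPC. A sketch of both, reconstructed from the trace; $tgt_pid stands for the target's PID, which the full script records when it launches spdk_tgt:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # locks_exist: spdk_tgt -m 0x1 should hold a file lock whose path contains spdk_cpu_lock
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # default_locks_via_rpc flow: drop the locks, then re-enable them and
  # confirm the lock file is held again before killing the target
  $rpc framework_disable_cpumask_locks
  locks_exist "$tgt_pid" && echo "unexpected: core lock still held"
  $rpc framework_enable_cpumask_locks
  locks_exist "$tgt_pid" || echo "unexpected: core lock not re-acquired"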
00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.645 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 [2024-07-23 03:06:19.084288] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:52.645 [2024-07-23 03:06:19.084397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315089 ] 00:06:52.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.645 [2024-07-23 03:06:19.147103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.903 [2024-07-23 03:06:19.235190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.161 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=315092 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 315092 /var/tmp/spdk2.sock 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315092 ']' 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.162 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.162 [2024-07-23 03:06:19.549640] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:53.162 [2024-07-23 03:06:19.549725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315092 ] 00:06:53.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.162 [2024-07-23 03:06:19.631006] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.162 [2024-07-23 03:06:19.631051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.420 [2024-07-23 03:06:19.814528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.987 03:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.987 03:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:53.987 03:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 315089 00:06:53.987 03:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 315089 00:06:53.987 03:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.553 lslocks: write error 00:06:54.553 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 315089 00:06:54.553 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 315089 ']' 00:06:54.553 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 315089 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315089 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315089' 00:06:54.554 killing process with pid 315089 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 315089 00:06:54.554 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 315089 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 315092 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 315092 ']' 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 315092 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315092 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315092' 00:06:55.488 killing 
process with pid 315092 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 315092 00:06:55.488 03:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 315092 00:06:56.053 00:06:56.053 real 0m3.337s 00:06:56.053 user 0m3.440s 00:06:56.053 sys 0m1.125s 00:06:56.053 03:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.053 03:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.053 ************************************ 00:06:56.053 END TEST non_locking_app_on_locked_coremask 00:06:56.053 ************************************ 00:06:56.053 03:06:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.053 03:06:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.053 03:06:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.053 03:06:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.053 ************************************ 00:06:56.053 START TEST locking_app_on_unlocked_coremask 00:06:56.053 ************************************ 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=315523 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 315523 /var/tmp/spdk.sock 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315523 ']' 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.053 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.053 [2024-07-23 03:06:22.468296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:56.053 [2024-07-23 03:06:22.468403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315523 ] 00:06:56.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.053 [2024-07-23 03:06:22.531596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
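The locks_exist checks traced in these runs (lslocks -p <pid> piped into grep -q spdk_cpu_lock) are how the suite verifies that a target still holds its CPU core lock; the stray "lslocks: write error" lines are almost certainly lslocks hitting a closed pipe after grep -q exits on the first match, not a test failure. Pulled out on its own, the check is something like:

    locks_exist() {
        # succeeds when the process still has an spdk_cpu_lock file locked
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 315089 && echo "core lock still held by the first target"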
00:06:56.053 [2024-07-23 03:06:22.531651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.053 [2024-07-23 03:06:22.620379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=315532 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 315532 /var/tmp/spdk2.sock 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315532 ']' 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.311 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.569 [2024-07-23 03:06:22.929162] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:56.569 [2024-07-23 03:06:22.929257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315532 ] 00:06:56.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.569 [2024-07-23 03:06:23.024981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.827 [2024-07-23 03:06:23.210092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.393 03:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.393 03:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:57.393 03:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 315532 00:06:57.393 03:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 315532 00:06:57.393 03:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.959 lslocks: write error 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 315523 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 315523 ']' 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 315523 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315523 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315523' 00:06:57.959 killing process with pid 315523 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 315523 00:06:57.959 03:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 315523 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 315532 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 315532 ']' 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 315532 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315532 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:58.893 
03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315532' 00:06:58.893 killing process with pid 315532 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 315532 00:06:58.893 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 315532 00:06:59.151 00:06:59.151 real 0m3.172s 00:06:59.151 user 0m3.279s 00:06:59.151 sys 0m1.056s 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.151 ************************************ 00:06:59.151 END TEST locking_app_on_unlocked_coremask 00:06:59.151 ************************************ 00:06:59.151 03:06:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.151 03:06:25 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.151 03:06:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.151 03:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.151 ************************************ 00:06:59.151 START TEST locking_app_on_locked_coremask 00:06:59.151 ************************************ 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=315957 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 315957 /var/tmp/spdk.sock 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315957 ']' 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.151 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.151 [2024-07-23 03:06:25.684729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:59.151 [2024-07-23 03:06:25.684813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315957 ] 00:06:59.151 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.409 [2024-07-23 03:06:25.748148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.409 [2024-07-23 03:06:25.838639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=315966 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 315966 /var/tmp/spdk2.sock 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 315966 /var/tmp/spdk2.sock 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 315966 /var/tmp/spdk2.sock 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 315966 ']' 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.668 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.668 [2024-07-23 03:06:26.145920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:59.668 [2024-07-23 03:06:26.146013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315966 ] 00:06:59.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.931 [2024-07-23 03:06:26.244182] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 315957 has claimed it. 00:06:59.931 [2024-07-23 03:06:26.244253] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (315966) - No such process 00:07:00.497 ERROR: process (pid: 315966) is no longer running 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 315957 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 315957 00:07:00.497 03:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.755 lslocks: write error 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 315957 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 315957 ']' 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 315957 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315957 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315957' 00:07:00.755 killing process with pid 315957 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 315957 00:07:00.755 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 315957 00:07:01.321 00:07:01.321 real 0m2.035s 00:07:01.321 user 0m2.154s 00:07:01.321 sys 0m0.655s 00:07:01.321 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.321 03:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.321 ************************************ 00:07:01.321 END TEST locking_app_on_locked_coremask 00:07:01.321 ************************************ 00:07:01.321 03:06:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.321 03:06:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.321 03:06:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.321 03:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.321 ************************************ 00:07:01.321 START TEST locking_overlapped_coremask 00:07:01.321 ************************************ 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=316147 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 316147 /var/tmp/spdk.sock 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 316147 ']' 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.321 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.321 [2024-07-23 03:06:27.774796] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:01.321 [2024-07-23 03:06:27.774893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316147 ] 00:07:01.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.321 [2024-07-23 03:06:27.840422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.579 [2024-07-23 03:06:27.933381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.579 [2024-07-23 03:06:27.933459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.579 [2024-07-23 03:06:27.933461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.836 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=316266 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 316266 /var/tmp/spdk2.sock 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 316266 /var/tmp/spdk2.sock 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 316266 /var/tmp/spdk2.sock 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 316266 ']' 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.837 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.837 [2024-07-23 03:06:28.235141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:01.837 [2024-07-23 03:06:28.235238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316266 ] 00:07:01.837 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.837 [2024-07-23 03:06:28.323725] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 316147 has claimed it. 00:07:01.837 [2024-07-23 03:06:28.323792] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (316266) - No such process 00:07:02.402 ERROR: process (pid: 316266) is no longer running 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 316147 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 316147 ']' 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 316147 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 316147 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 316147' 00:07:02.402 killing process with pid 316147 00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 316147 
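The claim failure above is the whole point of locking_overlapped_coremask: the first target runs with -m 0x7 (binary 111, cores 0 to 2) and the second with -m 0x1c (binary 11100, cores 2 to 4), so the masks overlap on exactly one core, core 2, which the first target has already locked, hence "Cannot create lock on core 2". The overlap can be checked with a one-liner:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); the AND is the contested set
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. only core 2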
00:07:02.402 03:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 316147 00:07:02.968 00:07:02.968 real 0m1.611s 00:07:02.968 user 0m4.305s 00:07:02.968 sys 0m0.476s 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.968 ************************************ 00:07:02.968 END TEST locking_overlapped_coremask 00:07:02.968 ************************************ 00:07:02.968 03:06:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.968 03:06:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.968 03:06:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.968 03:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.968 ************************************ 00:07:02.968 START TEST locking_overlapped_coremask_via_rpc 00:07:02.968 ************************************ 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=316431 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 316431 /var/tmp/spdk.sock 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 316431 ']' 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.968 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.968 [2024-07-23 03:06:29.441592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.968 [2024-07-23 03:06:29.441713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316431 ] 00:07:02.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.968 [2024-07-23 03:06:29.504551] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.968 [2024-07-23 03:06:29.504605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.227 [2024-07-23 03:06:29.594628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.227 [2024-07-23 03:06:29.594679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.227 [2024-07-23 03:06:29.594682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.485 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=316442 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 316442 /var/tmp/spdk2.sock 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 316442 ']' 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.486 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.486 [2024-07-23 03:06:29.903161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.486 [2024-07-23 03:06:29.903259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316442 ] 00:07:03.486 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.486 [2024-07-23 03:06:29.989858] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.486 [2024-07-23 03:06:29.989906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.744 [2024-07-23 03:06:30.176257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.744 [2024-07-23 03:06:30.179673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:03.744 [2024-07-23 03:06:30.179675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.311 [2024-07-23 03:06:30.866720] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 316431 has claimed it. 
00:07:04.311 request: 00:07:04.311 { 00:07:04.311 "method": "framework_enable_cpumask_locks", 00:07:04.311 "req_id": 1 00:07:04.311 } 00:07:04.311 Got JSON-RPC error response 00:07:04.311 response: 00:07:04.311 { 00:07:04.311 "code": -32603, 00:07:04.311 "message": "Failed to claim CPU core: 2" 00:07:04.311 } 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 316431 /var/tmp/spdk.sock 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 316431 ']' 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.311 03:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 316442 /var/tmp/spdk2.sock 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 316442 ']' 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
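In this via_rpc variant both targets start with --disable-cpumask-locks, and the first one (pid 316431) then claims its cores through the framework_enable_cpumask_locks RPC; issuing the same RPC against the second target's socket fails with the JSON-RPC error shown above because core 2 is already locked. Assuming the traced rpc_cmd helper is the usual wrapper around SPDK's scripts/rpc.py, the two calls would look roughly like:

    # first target: claims cores 0-2 and creates the /var/tmp/spdk_cpu_lock_* files
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target: expected to fail with -32603 "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "lock claim refused, as the test expects"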
00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.569 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:04.826 00:07:04.826 real 0m1.978s 00:07:04.826 user 0m1.020s 00:07:04.826 sys 0m0.177s 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.826 03:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.826 ************************************ 00:07:04.826 END TEST locking_overlapped_coremask_via_rpc 00:07:04.826 ************************************ 00:07:04.826 03:06:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:04.826 03:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 316431 ]] 00:07:04.826 03:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 316431 00:07:04.826 03:06:31 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 316431 ']' 00:07:04.826 03:06:31 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 316431 00:07:04.826 03:06:31 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:04.826 03:06:31 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:04.826 03:06:31 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 316431 00:07:05.089 03:06:31 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.089 03:06:31 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.089 03:06:31 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 316431' 00:07:05.089 killing process with pid 316431 00:07:05.089 03:06:31 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 316431 00:07:05.089 03:06:31 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 316431 00:07:05.386 03:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 316442 ]] 00:07:05.386 03:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 316442 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 316442 ']' 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 316442 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
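The check_remaining_locks step traced just above compares the lock files actually present under /var/tmp with the set expected for the 0x7 mask (spdk_cpu_lock_000 through 002). A stand-alone sketch of that comparison, assuming the default lock directory shown in the trace:

    expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    found=(/var/tmp/spdk_cpu_lock_*)
    [[ "${found[*]}" == "${expected[*]}" ]] && echo "exactly cores 0-2 are locked"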
00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 316442 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 316442' 00:07:05.386 killing process with pid 316442 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 316442 00:07:05.386 03:06:31 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 316442 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 316431 ]] 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 316431 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 316431 ']' 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 316431 00:07:05.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (316431) - No such process 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 316431 is not found' 00:07:05.953 Process with pid 316431 is not found 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 316442 ]] 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 316442 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 316442 ']' 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 316442 00:07:05.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (316442) - No such process 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 316442 is not found' 00:07:05.953 Process with pid 316442 is not found 00:07:05.953 03:06:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.953 00:07:05.953 real 0m15.837s 00:07:05.953 user 0m27.333s 00:07:05.953 sys 0m5.437s 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.953 03:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.953 ************************************ 00:07:05.953 END TEST cpu_locks 00:07:05.953 ************************************ 00:07:05.953 00:07:05.953 real 0m41.735s 00:07:05.953 user 1m18.926s 00:07:05.953 sys 0m9.492s 00:07:05.953 03:06:32 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.953 03:06:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.953 ************************************ 00:07:05.953 END TEST event 00:07:05.953 ************************************ 00:07:05.953 03:06:32 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:05.953 03:06:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.953 03:06:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.953 03:06:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.953 ************************************ 00:07:05.953 START TEST thread 00:07:05.953 ************************************ 00:07:05.953 03:06:32 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:05.953 * Looking for test storage... 00:07:05.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:05.953 03:06:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.953 03:06:32 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:05.953 03:06:32 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.953 03:06:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.953 ************************************ 00:07:05.953 START TEST thread_poller_perf 00:07:05.953 ************************************ 00:07:05.953 03:06:32 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.953 [2024-07-23 03:06:32.417393] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:05.953 [2024-07-23 03:06:32.417458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316927 ] 00:07:05.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.953 [2024-07-23 03:06:32.474611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.212 [2024-07-23 03:06:32.565184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.212 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:07.144 ====================================== 00:07:07.144 busy:2710414341 (cyc) 00:07:07.144 total_run_count: 292000 00:07:07.144 tsc_hz: 2700000000 (cyc) 00:07:07.144 ====================================== 00:07:07.144 poller_cost: 9282 (cyc), 3437 (nsec) 00:07:07.144 00:07:07.144 real 0m1.246s 00:07:07.144 user 0m1.161s 00:07:07.144 sys 0m0.080s 00:07:07.144 03:06:33 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.144 03:06:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.144 ************************************ 00:07:07.144 END TEST thread_poller_perf 00:07:07.144 ************************************ 00:07:07.144 03:06:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.144 03:06:33 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:07.144 03:06:33 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.144 03:06:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.144 ************************************ 00:07:07.144 START TEST thread_poller_perf 00:07:07.144 ************************************ 00:07:07.144 03:06:33 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.144 [2024-07-23 03:06:33.716903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:07.144 [2024-07-23 03:06:33.716967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317088 ] 00:07:07.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.402 [2024-07-23 03:06:33.782091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.402 [2024-07-23 03:06:33.873195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.402 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.777 ====================================== 00:07:08.777 busy:2703007208 (cyc) 00:07:08.777 total_run_count: 3858000 00:07:08.777 tsc_hz: 2700000000 (cyc) 00:07:08.777 ====================================== 00:07:08.777 poller_cost: 700 (cyc), 259 (nsec) 00:07:08.777 00:07:08.777 real 0m1.249s 00:07:08.777 user 0m1.163s 00:07:08.777 sys 0m0.080s 00:07:08.777 03:06:34 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.777 03:06:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 ************************************ 00:07:08.777 END TEST thread_poller_perf 00:07:08.777 ************************************ 00:07:08.777 03:06:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:08.777 00:07:08.777 real 0m2.638s 00:07:08.777 user 0m2.402s 00:07:08.777 sys 0m0.235s 00:07:08.777 03:06:34 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.777 03:06:34 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 ************************************ 00:07:08.777 END TEST thread 00:07:08.777 ************************************ 00:07:08.777 03:06:34 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:08.777 03:06:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.777 03:06:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.777 03:06:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 ************************************ 00:07:08.777 START TEST accel 00:07:08.777 ************************************ 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:08.777 * Looking for test storage... 
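The two poller_perf summaries above are consistent with poller_cost being the busy cycle count divided by total_run_count, converted to nanoseconds via the reported 2.7 GHz tsc_hz. Redoing the integer arithmetic on the logged figures:

    # 1 us period run: cost per poller iteration in cycles, then in nanoseconds
    echo $(( 2710414341 / 292000 ))               # 9282 cyc
    echo $(( 9282 * 1000000000 / 2700000000 ))    # 3437 nsec
    # 0 us period run
    echo $(( 2703007208 / 3858000 ))              # 700 cyc
    echo $(( 700 * 1000000000 / 2700000000 ))     # 259 nsec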
00:07:08.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:08.777 03:06:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:08.777 03:06:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:08.777 03:06:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.777 03:06:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=317282 00:07:08.777 03:06:35 accel -- accel/accel.sh@63 -- # waitforlisten 317282 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@827 -- # '[' -z 317282 ']' 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.777 03:06:35 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:08.777 03:06:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.777 03:06:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.777 03:06:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.777 03:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 03:06:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.777 03:06:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.777 03:06:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.777 03:06:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:08.777 03:06:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:08.777 [2024-07-23 03:06:35.126847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:08.777 [2024-07-23 03:06:35.126948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317282 ] 00:07:08.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.777 [2024-07-23 03:06:35.185846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.777 [2024-07-23 03:06:35.275579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@860 -- # return 0 00:07:09.036 03:06:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:09.036 03:06:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:09.036 03:06:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:09.036 03:06:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:09.036 03:06:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:09.036 03:06:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.036 03:06:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 
03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.036 03:06:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.036 03:06:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.036 03:06:35 accel -- accel/accel.sh@75 -- # killprocess 317282 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@946 -- # '[' -z 317282 ']' 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@950 -- # kill -0 317282 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@951 -- # uname 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 317282 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 317282' 00:07:09.036 killing process with pid 317282 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@965 -- # kill 317282 00:07:09.036 03:06:35 accel -- common/autotest_common.sh@970 -- # wait 317282 00:07:09.603 03:06:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:09.603 03:06:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:09.603 03:06:35 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:09.603 03:06:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.603 03:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.603 03:06:36 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:09.603 03:06:36 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
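The run of for opc_opt / IFS== / read -r opc module / expected_opcs["$opc"]=software entries above is the suite walking an opcode-to-module listing and recording which engine each opcode is expected to use; with no hardware accel modules configured in this run, every opcode is pinned to software. A minimal stand-alone sketch of that pattern follows; the $assignments variable and the inline JSON are assumptions for illustration, not the script's actual plumbing.

  # Sketch only: assume the opcode-to-module map arrives as JSON, e.g.
  #   {"copy":"software","crc32c":"software","compress":"software"}
  assignments='{"copy":"software","crc32c":"software","compress":"software"}'
  declare -A expected_opcs
  exp_opcs=($(echo "$assignments" | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # split "crc32c=software" into opcode and module name
      expected_opcs["$opc"]=software            # no HW modules loaded, so software is expected everywhere
  done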
00:07:09.603 03:06:36 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.603 03:06:36 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:09.603 03:06:36 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:09.603 03:06:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:09.603 03:06:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.603 03:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.603 ************************************ 00:07:09.603 START TEST accel_missing_filename 00:07:09.603 ************************************ 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.603 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:09.603 03:06:36 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:09.603 [2024-07-23 03:06:36.092995] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:09.603 [2024-07-23 03:06:36.093061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317451 ] 00:07:09.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.603 [2024-07-23 03:06:36.155047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.862 [2024-07-23 03:06:36.248897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.862 [2024-07-23 03:06:36.310646] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.862 [2024-07-23 03:06:36.396970] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:10.120 A filename is required. 
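The accel_missing_filename case above is an expected-failure assertion: NOT accel_perf -t 1 -w compress only passes because accel_perf refuses to start a compress workload without -l <input file> and aborts with "A filename is required.". A minimal sketch of that style of wrapper is below; it is illustrative only and deliberately simpler than the real autotest_common.sh helper, which also normalizes the child's exit status (the es=234 -> es=1 bookkeeping visible in the next few trace lines).

  # Sketch of an expected-failure helper: succeed only if the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded, so the negative test fails
      fi
      return 0        # a non-zero exit is exactly what we wanted
  }
  NOT accel_perf -t 1 -w compress    # compress without -l <file> must be rejected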
00:07:10.120 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:10.120 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.120 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:10.120 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:10.121 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:10.121 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.121 00:07:10.121 real 0m0.404s 00:07:10.121 user 0m0.288s 00:07:10.121 sys 0m0.149s 00:07:10.121 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.121 03:06:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:10.121 ************************************ 00:07:10.121 END TEST accel_missing_filename 00:07:10.121 ************************************ 00:07:10.121 03:06:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.121 03:06:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:10.121 03:06:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.121 03:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.121 ************************************ 00:07:10.121 START TEST accel_compress_verify 00:07:10.121 ************************************ 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.121 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.121 
03:06:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:10.121 03:06:36 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:10.121 [2024-07-23 03:06:36.543930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:10.121 [2024-07-23 03:06:36.544007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317473 ] 00:07:10.121 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.121 [2024-07-23 03:06:36.608755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.378 [2024-07-23 03:06:36.703253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.379 [2024-07-23 03:06:36.763733] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.379 [2024-07-23 03:06:36.843578] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:10.379 00:07:10.379 Compression does not support the verify option, aborting. 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.379 00:07:10.379 real 0m0.396s 00:07:10.379 user 0m0.281s 00:07:10.379 sys 0m0.149s 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.379 03:06:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:10.379 ************************************ 00:07:10.379 END TEST accel_compress_verify 00:07:10.379 ************************************ 00:07:10.379 03:06:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:10.379 03:06:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:10.379 03:06:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.379 03:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.637 ************************************ 00:07:10.637 START TEST accel_wrong_workload 00:07:10.637 ************************************ 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.637 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:10.637 03:06:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:10.637 Unsupported workload type: foobar 00:07:10.637 [2024-07-23 03:06:36.979110] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:10.637 accel_perf options: 00:07:10.637 [-h help message] 00:07:10.637 [-q queue depth per core] 00:07:10.637 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.637 [-T number of threads per core 00:07:10.637 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.637 [-t time in seconds] 00:07:10.637 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.637 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.637 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.637 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.637 [-S for crc32c workload, use this seed value (default 0) 00:07:10.637 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.637 [-f for fill workload, use this BYTE value (default 255) 00:07:10.638 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.638 [-y verify result if this switch is on] 00:07:10.638 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.638 Can be used to spread operations across a wider range of memory. 
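For contrast with the rejected -w foobar run, the option listing above is enough to build a valid invocation: pick a workload from the supported list, a duration, and whatever tuning flags apply. The command below is a hedged example assembled only from flags shown in that listing; it reuses the in-tree example binary exercised throughout this log and is assumed to be run from an SPDK build tree.

  # 1-second software crc32c run: seed 32 (-S), queue depth 64 (-q), verify results (-y)
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64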
00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.638 00:07:10.638 real 0m0.020s 00:07:10.638 user 0m0.010s 00:07:10.638 sys 0m0.010s 00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.638 03:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:10.638 ************************************ 00:07:10.638 END TEST accel_wrong_workload 00:07:10.638 ************************************ 00:07:10.638 Error: writing output failed: Broken pipe 00:07:10.638 03:06:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.638 03:06:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:10.638 03:06:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.638 03:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.638 ************************************ 00:07:10.638 START TEST accel_negative_buffers 00:07:10.638 ************************************ 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:10.638 03:06:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:10.638 -x option must be non-negative. 
00:07:10.638 [2024-07-23 03:06:37.037312] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:10.638 accel_perf options: 00:07:10.638 [-h help message] 00:07:10.638 [-q queue depth per core] 00:07:10.638 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.638 [-T number of threads per core 00:07:10.638 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.638 [-t time in seconds] 00:07:10.638 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.638 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.638 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.638 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.638 [-S for crc32c workload, use this seed value (default 0) 00:07:10.638 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.638 [-f for fill workload, use this BYTE value (default 255) 00:07:10.638 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.638 [-y verify result if this switch is on] 00:07:10.638 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.638 Can be used to spread operations across a wider range of memory. 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.638 00:07:10.638 real 0m0.021s 00:07:10.638 user 0m0.011s 00:07:10.638 sys 0m0.011s 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.638 03:06:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:10.638 ************************************ 00:07:10.638 END TEST accel_negative_buffers 00:07:10.638 ************************************ 00:07:10.638 Error: writing output failed: Broken pipe 00:07:10.638 03:06:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:10.638 03:06:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:10.638 03:06:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.638 03:06:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.638 ************************************ 00:07:10.638 START TEST accel_crc32c 00:07:10.638 ************************************ 00:07:10.638 03:06:37 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.638 03:06:37 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.638 [2024-07-23 03:06:37.105748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:10.638 [2024-07-23 03:06:37.105807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317658 ] 00:07:10.638 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.638 [2024-07-23 03:06:37.166546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.897 [2024-07-23 03:06:37.264673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.897 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.898 03:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.898 03:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.898 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.898 03:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.270 03:06:38 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.270 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.271 03:06:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:12.271 03:06:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.271 00:07:12.271 real 0m1.403s 00:07:12.271 user 0m1.259s 00:07:12.271 sys 0m0.148s 00:07:12.271 03:06:38 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.271 03:06:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:12.271 ************************************ 00:07:12.271 END TEST accel_crc32c 00:07:12.271 ************************************ 00:07:12.271 03:06:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:12.271 03:06:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:12.271 03:06:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.271 03:06:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.271 ************************************ 00:07:12.271 START TEST accel_crc32c_C2 00:07:12.271 ************************************ 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:12.271 03:06:38 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:12.271 [2024-07-23 03:06:38.559986] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:12.271 [2024-07-23 03:06:38.560048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317816 ] 00:07:12.271 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.271 [2024-07-23 03:06:38.623885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.271 [2024-07-23 03:06:38.714290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.271 03:06:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 
03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.644 00:07:13.644 real 0m1.407s 00:07:13.644 user 0m1.266s 00:07:13.644 sys 0m0.144s 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.644 03:06:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:13.644 ************************************ 00:07:13.644 END TEST accel_crc32c_C2 00:07:13.644 ************************************ 00:07:13.644 03:06:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:13.644 03:06:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:13.644 03:06:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.644 03:06:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.644 ************************************ 00:07:13.644 START TEST accel_copy 00:07:13.644 ************************************ 00:07:13.644 03:06:39 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.644 03:06:39 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.644 03:06:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.645 03:06:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.645 03:06:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:13.645 03:06:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:13.645 [2024-07-23 03:06:40.011870] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.645 [2024-07-23 03:06:40.011955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317973 ] 00:07:13.645 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.645 [2024-07-23 03:06:40.076228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.645 [2024-07-23 03:06:40.169413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.903 03:06:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:14.836 03:06:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.836 00:07:14.836 real 0m1.398s 00:07:14.836 user 0m1.257s 00:07:14.836 sys 0m0.143s 00:07:14.836 03:06:41 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.836 03:06:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 ************************************ 00:07:14.836 END TEST accel_copy 00:07:14.836 ************************************ 00:07:15.095 03:06:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.095 03:06:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:15.095 03:06:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.095 03:06:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.095 ************************************ 00:07:15.095 START TEST accel_fill 00:07:15.095 ************************************ 00:07:15.095 03:06:41 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.095 03:06:41 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:15.095 [2024-07-23 03:06:41.454211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:15.095 [2024-07-23 03:06:41.454276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318231 ] 00:07:15.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.095 [2024-07-23 03:06:41.516772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.095 [2024-07-23 03:06:41.607714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.095 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 03:06:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:16.286 03:06:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.287 00:07:16.287 real 0m1.387s 00:07:16.287 user 0m1.257s 00:07:16.287 sys 0m0.132s 00:07:16.287 03:06:42 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.287 03:06:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:16.287 ************************************ 00:07:16.287 END TEST accel_fill 00:07:16.287 ************************************ 00:07:16.287 03:06:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:16.287 03:06:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:16.287 03:06:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.287 03:06:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.546 ************************************ 00:07:16.546 START TEST accel_copy_crc32c 00:07:16.546 ************************************ 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:16.546 03:06:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:07:16.546 [2024-07-23 03:06:42.885458] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:16.546 [2024-07-23 03:06:42.885522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318403 ] 00:07:16.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.546 [2024-07-23 03:06:42.947884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.546 [2024-07-23 03:06:43.039345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.546 03:06:43 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.546 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.547 03:06:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.922 00:07:17.922 real 0m1.400s 00:07:17.922 user 0m1.264s 00:07:17.922 sys 0m0.138s 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.922 03:06:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:17.922 ************************************ 00:07:17.922 END TEST accel_copy_crc32c 00:07:17.922 ************************************ 00:07:17.922 03:06:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.922 03:06:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:17.922 03:06:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.922 03:06:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.922 ************************************ 00:07:17.922 START TEST accel_copy_crc32c_C2 00:07:17.922 ************************************ 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.922 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:17.922 [2024-07-23 03:06:44.327517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:17.922 [2024-07-23 03:06:44.327580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318561 ] 00:07:17.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.922 [2024-07-23 03:06:44.389532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.922 [2024-07-23 03:06:44.482347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.180 03:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.556 00:07:19.556 real 0m1.404s 00:07:19.556 user 0m1.264s 00:07:19.556 sys 0m0.142s 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.556 03:06:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:19.556 
************************************ 00:07:19.556 END TEST accel_copy_crc32c_C2 00:07:19.556 ************************************ 00:07:19.556 03:06:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:19.556 03:06:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:19.556 03:06:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.556 03:06:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.556 ************************************ 00:07:19.556 START TEST accel_dualcast 00:07:19.556 ************************************ 00:07:19.556 03:06:45 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:19.556 [2024-07-23 03:06:45.776009] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:19.556 [2024-07-23 03:06:45.776072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318712 ] 00:07:19.556 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.556 [2024-07-23 03:06:45.838804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.556 [2024-07-23 03:06:45.930430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 
03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.556 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.557 03:06:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:20.932 03:06:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.932 00:07:20.932 real 0m1.397s 00:07:20.932 user 0m1.260s 00:07:20.932 sys 0m0.138s 00:07:20.932 03:06:47 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.932 03:06:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 ************************************ 00:07:20.932 END TEST accel_dualcast 00:07:20.932 ************************************ 00:07:20.932 03:06:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:20.932 03:06:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:20.932 03:06:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.932 03:06:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 ************************************ 00:07:20.932 START TEST accel_compare 00:07:20.932 ************************************ 00:07:20.932 03:06:47 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:20.932 03:06:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:20.932 [2024-07-23 03:06:47.213764] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:20.932 [2024-07-23 03:06:47.213821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318986 ] 00:07:20.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.933 [2024-07-23 03:06:47.275484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.933 [2024-07-23 03:06:47.365223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.933 03:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:22.307 03:06:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.307 00:07:22.307 real 0m1.399s 00:07:22.307 user 0m1.265s 00:07:22.307 sys 0m0.135s 00:07:22.307 03:06:48 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.307 03:06:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:22.307 ************************************ 00:07:22.307 END TEST accel_compare 00:07:22.307 ************************************ 00:07:22.307 03:06:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:22.307 03:06:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:22.307 03:06:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.307 03:06:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.307 ************************************ 00:07:22.307 START TEST accel_xor 00:07:22.307 ************************************ 00:07:22.307 03:06:48 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:22.307 [2024-07-23 03:06:48.657219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:22.307 [2024-07-23 03:06:48.657280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319144 ] 00:07:22.307 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.307 [2024-07-23 03:06:48.719273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.307 [2024-07-23 03:06:48.810899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.307 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.308 03:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 
03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:23.681 03:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.681 00:07:23.681 real 0m1.406s 00:07:23.681 user 0m1.262s 00:07:23.681 sys 0m0.145s 00:07:23.681 03:06:50 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.681 03:06:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:23.681 ************************************ 00:07:23.681 END TEST accel_xor 00:07:23.681 ************************************ 00:07:23.681 03:06:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:23.681 03:06:50 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:23.681 03:06:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.681 03:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.682 ************************************ 00:07:23.682 START TEST accel_xor 00:07:23.682 ************************************ 00:07:23.682 03:06:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:23.682 03:06:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:23.682 [2024-07-23 03:06:50.108833] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
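The xor pass above pushed what the trace reads as two 4096-byte source buffers through the software accel module and completed in roughly 1.4 s (real 0m1.406s); the trace that follows repeats the workload with three sources (accel_test -t 1 -w xor -y -x 3). As a rough sketch of the operation being timed, assuming a plain byte-wise XOR rather than SPDK's actual code path:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: XOR 'nsrc' source buffers of 'len' bytes into 'dst'.
 * This is the kind of operation accel_perf times with "-w xor -x <nsrc>";
 * it is not SPDK's implementation. */
static void xor_buffers(uint8_t *dst, uint8_t *const *srcs, size_t nsrc, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		uint8_t v = 0;

		for (size_t j = 0; j < nsrc; j++)
			v ^= srcs[j][i];
		dst[i] = v;
	}
}
```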
00:07:23.682 [2024-07-23 03:06:50.108911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319304 ] 00:07:23.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.682 [2024-07-23 03:06:50.170749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.940 [2024-07-23 03:06:50.264856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.940 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 03:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 
03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:25.358 03:06:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.358 00:07:25.358 real 0m1.409s 00:07:25.358 user 0m1.267s 00:07:25.358 sys 0m0.143s 00:07:25.358 03:06:51 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.358 03:06:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:25.358 ************************************ 00:07:25.358 END TEST accel_xor 00:07:25.358 ************************************ 00:07:25.358 03:06:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:25.358 03:06:51 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:25.358 03:06:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.358 03:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.358 ************************************ 00:07:25.358 START TEST accel_dif_verify 00:07:25.358 ************************************ 00:07:25.358 03:06:51 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:25.358 [2024-07-23 03:06:51.566139] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
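The three-source xor pass lands in the same ballpark on the software path (real 0m1.409s). The suite then moves to the DIF workloads: dif_verify below is fed 4096-byte buffers that the script describes as 512-byte blocks carrying 8 bytes of protection information each, and it checks the guard stored with every block. A minimal guard check, assuming the CRC-16 with polynomial 0x8BB7 that is commonly used for the T10 DIF guard; this is illustrative, not SPDK's implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: recompute the CRC-16 guard of one data block and compare
 * it with the stored value, the core of a dif_verify-style check. */
static uint16_t crc16_guard(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int b = 0; b < 8; b++)
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
	}
	return crc;
}

static bool dif_guard_ok(const uint8_t *block, size_t block_size, uint16_t stored_guard)
{
	return crc16_guard(block, block_size) == stored_guard;
}
```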
00:07:25.358 [2024-07-23 03:06:51.566200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319460 ] 00:07:25.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.358 [2024-07-23 03:06:51.630070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.358 [2024-07-23 03:06:51.721309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.358 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 
03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 03:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 
03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:26.734 03:06:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.734 00:07:26.734 real 0m1.397s 00:07:26.734 user 0m1.247s 00:07:26.734 sys 0m0.153s 00:07:26.734 03:06:52 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.734 03:06:52 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:26.734 ************************************ 00:07:26.734 END TEST accel_dif_verify 00:07:26.734 ************************************ 00:07:26.734 03:06:52 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:26.734 03:06:52 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:26.734 03:06:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.734 03:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.734 ************************************ 00:07:26.734 START TEST accel_dif_generate 00:07:26.734 ************************************ 00:07:26.734 03:06:52 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
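dif_verify also finishes in about 1.4 s (real 0m1.397s). dif_generate, started above, is the producing side of the same feature: instead of checking the 8-byte protection field it writes one for every block. A sketch of packing that field, assuming the conventional layout of a 2-byte guard, a 2-byte application tag and a 4-byte reference tag, all big-endian; the guard itself would be a CRC-16 over the block data as in the previous sketch, and none of this is SPDK's code:

```c
#include <stdint.h>

/* Illustrative only: pack an 8-byte protection field for one block under the
 * assumed T10 DIF layout (guard, application tag, reference tag). */
static void dif_pack(uint8_t out[8], uint16_t guard, uint16_t app_tag, uint32_t ref_tag)
{
	out[0] = guard >> 8;
	out[1] = guard & 0xff;
	out[2] = app_tag >> 8;
	out[3] = app_tag & 0xff;
	out[4] = ref_tag >> 24;
	out[5] = (ref_tag >> 16) & 0xff;
	out[6] = (ref_tag >> 8) & 0xff;
	out[7] = ref_tag & 0xff;
}
```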
00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:26.734 03:06:52 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:26.734 [2024-07-23 03:06:53.012322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:26.734 [2024-07-23 03:06:53.012386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319736 ] 00:07:26.734 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.734 [2024-07-23 03:06:53.073022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.734 [2024-07-23 03:06:53.163296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.734 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.735 03:06:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:28.108 03:06:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.108 00:07:28.108 real 0m1.391s 00:07:28.108 user 0m1.244s 00:07:28.108 sys 
0m0.151s 00:07:28.108 03:06:54 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.108 03:06:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 ************************************ 00:07:28.108 END TEST accel_dif_generate 00:07:28.108 ************************************ 00:07:28.108 03:06:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:28.108 03:06:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:28.108 03:06:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.108 03:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 ************************************ 00:07:28.108 START TEST accel_dif_generate_copy 00:07:28.108 ************************************ 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:28.108 [2024-07-23 03:06:54.450764] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
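dif_generate completes at a similar rate (real 0m1.391s), and dif_generate_copy, started above, folds a buffer copy into the same operation: data is copied to the destination and the protection field is produced along the way. A bare-bones sketch under the same assumed block layout, with the 8-byte field computed separately (for example via the packing helper above) and appended after each copied block; illustrative only, not SPDK's implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: copy one data block and append an already computed
 * 8-byte protection field, as a dif_generate_copy-style operation would. */
static void copy_block_with_pi(uint8_t *dst, const uint8_t *src,
			       size_t block_size, const uint8_t pi[8])
{
	memcpy(dst, src, block_size);
	memcpy(dst + block_size, pi, 8);
}
```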
00:07:28.108 [2024-07-23 03:06:54.450824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319894 ] 00:07:28.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.108 [2024-07-23 03:06:54.514654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.108 [2024-07-23 03:06:54.608186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.108 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.109 03:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.483 00:07:29.483 real 0m1.408s 00:07:29.483 user 0m1.264s 00:07:29.483 sys 0m0.145s 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.483 03:06:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.483 ************************************ 00:07:29.483 END TEST accel_dif_generate_copy 00:07:29.483 ************************************ 00:07:29.483 03:06:55 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:29.483 03:06:55 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.483 03:06:55 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:29.483 03:06:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.483 03:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.483 ************************************ 00:07:29.483 START TEST accel_comp 00:07:29.483 ************************************ 00:07:29.483 03:06:55 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.483 03:06:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:29.483 [2024-07-23 03:06:55.900652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:29.483 [2024-07-23 03:06:55.900727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320046 ] 00:07:29.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.483 [2024-07-23 03:06:55.964639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.483 [2024-07-23 03:06:56.058466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 
03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.742 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.743 03:06:56 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.743 03:06:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:31.118 03:06:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.118 00:07:31.118 real 0m1.413s 00:07:31.118 user 0m1.269s 00:07:31.118 sys 0m0.146s 00:07:31.118 03:06:57 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.118 03:06:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:31.118 ************************************ 00:07:31.118 END TEST accel_comp 00:07:31.118 ************************************ 00:07:31.118 03:06:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.118 03:06:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:31.118 03:06:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.118 03:06:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.118 ************************************ 00:07:31.118 START TEST accel_decomp 00:07:31.118 ************************************ 00:07:31.118 03:06:57 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:31.118 [2024-07-23 03:06:57.360919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:31.118 [2024-07-23 03:06:57.360989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320239 ] 00:07:31.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.118 [2024-07-23 03:06:57.427104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.118 [2024-07-23 03:06:57.519444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.118 03:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.493 03:06:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.493 00:07:32.493 real 0m1.418s 00:07:32.493 user 0m1.270s 00:07:32.493 sys 0m0.152s 00:07:32.493 03:06:58 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.493 03:06:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:32.493 ************************************ 00:07:32.493 END TEST accel_decomp 00:07:32.493 ************************************ 00:07:32.493 
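The decompress runs recorded above and below all drive the same accel_perf example binary with flags that appear verbatim in this log; a minimal standalone sketch of an equivalent invocation, assuming only the workspace path shown in the log and that the harness-supplied JSON config is dropped, is:

#!/usr/bin/env bash
# Minimal sketch, not part of the recorded run: replay the software-engine
# decompress case with the same flags the test harness passes above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path, taken from this log
BIB="$SPDK_DIR/test/accel/bib"                               # pre-compressed input file used by these tests

# -t 1      run the workload for 1 second
# -w        workload type (decompress here); the later variants in this log add
#           -o 0, -m 0xf or -T 2 on top of the same base command
# -l FILE   compressed input fed to the decompress operation
# -y        verify the output
# The harness additionally passes "-c /dev/fd/62" to hand accel_perf a JSON
# accel config; it is omitted here, so the default software module is used.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y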
03:06:58 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.493 03:06:58 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:32.493 03:06:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.493 03:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.493 ************************************ 00:07:32.493 START TEST accel_decmop_full 00:07:32.493 ************************************ 00:07:32.493 03:06:58 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:32.493 03:06:58 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:32.493 [2024-07-23 03:06:58.823984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:32.493 [2024-07-23 03:06:58.824044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320475 ] 00:07:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.493 [2024-07-23 03:06:58.885570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.493 [2024-07-23 03:06:58.975836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.494 03:06:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.868 03:07:00 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.868 00:07:33.868 real 0m1.400s 00:07:33.868 user 0m1.266s 00:07:33.868 sys 0m0.137s 00:07:33.868 03:07:00 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.868 03:07:00 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:33.868 ************************************ 00:07:33.868 END TEST accel_decmop_full 00:07:33.868 ************************************ 00:07:33.868 03:07:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.868 03:07:00 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:33.868 03:07:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.868 03:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.868 ************************************ 00:07:33.868 START TEST accel_decomp_mcore 00:07:33.868 ************************************ 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:33.868 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:33.868 [2024-07-23 03:07:00.268370] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:33.868 [2024-07-23 03:07:00.268433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320640 ] 00:07:33.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.868 [2024-07-23 03:07:00.332690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.868 [2024-07-23 03:07:00.428738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.868 [2024-07-23 03:07:00.428794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.868 [2024-07-23 03:07:00.428911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.868 [2024-07-23 03:07:00.428913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.127 03:07:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.501 00:07:35.501 real 0m1.420s 00:07:35.501 user 0m4.715s 00:07:35.501 sys 0m0.164s 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.501 03:07:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:35.502 ************************************ 00:07:35.502 END TEST accel_decomp_mcore 00:07:35.502 ************************************ 00:07:35.502 03:07:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.502 03:07:01 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:35.502 03:07:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.502 03:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.502 ************************************ 00:07:35.502 START TEST accel_decomp_full_mcore 00:07:35.502 ************************************ 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.502 [2024-07-23 03:07:01.733883] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:35.502 [2024-07-23 03:07:01.733946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320794 ] 00:07:35.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.502 [2024-07-23 03:07:01.795970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.502 [2024-07-23 03:07:01.892610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.502 [2024-07-23 03:07:01.892689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.502 [2024-07-23 03:07:01.892779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.502 [2024-07-23 03:07:01.892781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.502 03:07:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.502 03:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.874 00:07:36.874 real 0m1.422s 00:07:36.874 user 0m4.739s 00:07:36.874 sys 0m0.158s 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.874 03:07:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:36.874 ************************************ 00:07:36.874 END TEST accel_decomp_full_mcore 00:07:36.874 ************************************ 00:07:36.874 03:07:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.874 03:07:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:36.874 03:07:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.874 03:07:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.874 ************************************ 00:07:36.874 START TEST accel_decomp_mthread 00:07:36.874 ************************************ 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:36.874 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:36.874 [2024-07-23 03:07:03.199803] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:36.875 [2024-07-23 03:07:03.199861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321071 ] 00:07:36.875 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.875 [2024-07-23 03:07:03.262159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.875 [2024-07-23 03:07:03.354815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.875 03:07:03 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.247 00:07:38.247 real 0m1.416s 00:07:38.247 user 0m1.269s 00:07:38.247 sys 0m0.150s 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.247 03:07:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:38.247 ************************************ 00:07:38.247 END TEST accel_decomp_mthread 00:07:38.247 ************************************ 00:07:38.247 03:07:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.247 03:07:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:38.247 03:07:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.247 03:07:04 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.247 ************************************ 00:07:38.247 START TEST accel_decomp_full_mthread 00:07:38.247 ************************************ 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:38.247 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:38.247 [2024-07-23 03:07:04.656059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
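Both mthread cases in this stretch of the trace drive the accel_perf example binary with the command line recorded above. As a rough standalone reproduction (a sketch only: the workspace path is specific to this CI node, and the real accel.sh additionally feeds a JSON accel config on fd 62 via build_accel_config, which is omitted here):

    #!/usr/bin/env bash
    # Hypothetical manual re-run of the multi-threaded decompress case; every flag
    # value is copied from the accel_perf command line recorded in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -w decompress  workload (accel_opc=decompress in the trace)
    # -l .../bib     compressed input file shipped with the test suite
    # -t 1 / -T 2    the '1 seconds' and '2' values read by accel.sh above
    # -y / -o 0      remaining switches exactly as recorded in the trace
    args=(-t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2)
    "$SPDK/build/examples/accel_perf" "${args[@]}"
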
00:07:38.247 [2024-07-23 03:07:04.656129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321228 ] 00:07:38.247 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.247 [2024-07-23 03:07:04.716240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.247 [2024-07-23 03:07:04.810266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.506 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.507 03:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.880 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.881 00:07:39.881 real 0m1.446s 00:07:39.881 user 0m1.295s 00:07:39.881 sys 0m0.154s 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.881 03:07:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:39.881 ************************************ 00:07:39.881 END TEST accel_decomp_full_mthread 00:07:39.881 
************************************ 00:07:39.881 03:07:06 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:39.881 03:07:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.881 03:07:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:39.881 03:07:06 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:39.881 03:07:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.881 03:07:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.881 03:07:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.881 03:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.881 03:07:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.881 03:07:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.881 03:07:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.881 03:07:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.881 03:07:06 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.881 ************************************ 00:07:39.881 START TEST accel_dif_functional_tests 00:07:39.881 ************************************ 00:07:39.881 03:07:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.881 [2024-07-23 03:07:06.168080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:39.881 [2024-07-23 03:07:06.168151] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321392 ] 00:07:39.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.881 [2024-07-23 03:07:06.230224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.881 [2024-07-23 03:07:06.324412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.881 [2024-07-23 03:07:06.324479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.881 [2024-07-23 03:07:06.324482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.881 00:07:39.881 00:07:39.881 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.881 http://cunit.sourceforge.net/ 00:07:39.881 00:07:39.881 00:07:39.881 Suite: accel_dif 00:07:39.881 Test: verify: DIF generated, GUARD check ...passed 00:07:39.881 Test: verify: DIF generated, APPTAG check ...passed 00:07:39.881 Test: verify: DIF generated, REFTAG check ...passed 00:07:39.881 Test: verify: DIF not generated, GUARD check ...[2024-07-23 03:07:06.417617] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.881 passed 00:07:39.881 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 03:07:06.417707] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.881 passed 00:07:39.881 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 03:07:06.417739] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.881 passed 00:07:39.881 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:39.881 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 03:07:06.417800] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:39.881 passed 00:07:39.881 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:39.881 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:39.881 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:39.881 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 03:07:06.417942] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:39.881 passed 00:07:39.881 Test: verify copy: DIF generated, GUARD check ...passed 00:07:39.881 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:39.881 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:39.881 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 03:07:06.418093] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.881 passed 00:07:39.881 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 03:07:06.418128] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.881 passed 00:07:39.881 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 03:07:06.418160] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.881 passed 00:07:39.881 Test: generate copy: DIF generated, GUARD check ...passed 00:07:39.881 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:39.881 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:39.881 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:39.881 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:39.881 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:39.881 Test: generate copy: iovecs-len validate ...[2024-07-23 03:07:06.418374] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:39.881 passed 00:07:39.881 Test: generate copy: buffer alignment validate ...passed 00:07:39.881 00:07:39.881 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.881 suites 1 1 n/a 0 0 00:07:39.881 tests 26 26 26 0 0 00:07:39.881 asserts 115 115 115 0 n/a 00:07:39.881 00:07:39.881 Elapsed time = 0.002 seconds 00:07:40.140 00:07:40.140 real 0m0.498s 00:07:40.140 user 0m0.757s 00:07:40.140 sys 0m0.179s 00:07:40.140 03:07:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.140 03:07:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:40.140 ************************************ 00:07:40.140 END TEST accel_dif_functional_tests 00:07:40.140 ************************************ 00:07:40.140 00:07:40.140 real 0m31.624s 00:07:40.140 user 0m35.005s 00:07:40.140 sys 0m4.594s 00:07:40.140 03:07:06 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.140 03:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.140 ************************************ 00:07:40.140 END TEST accel 00:07:40.140 ************************************ 00:07:40.140 03:07:06 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:40.140 03:07:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.140 03:07:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.140 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:07:40.140 ************************************ 00:07:40.140 START TEST accel_rpc 00:07:40.140 ************************************ 00:07:40.140 03:07:06 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:40.399 * Looking for test storage... 00:07:40.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:40.399 03:07:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:40.399 03:07:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=321568 00:07:40.399 03:07:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:40.399 03:07:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 321568 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 321568 ']' 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.399 03:07:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.399 [2024-07-23 03:07:06.802442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
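The accel_rpc suite that starts here talks JSON-RPC to a bare spdk_tgt launched with --wait-for-rpc, assigning the copy opcode to a module before the framework initializes. Condensed into a hypothetical sketch using only RPCs that appear in the trace (the real accel_rpc.sh uses waitforlisten, traps and killprocess rather than the bare sleep/kill below):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # target idles until framework_start_init
    tgt_pid=$!
    sleep 2   # stand-in for the waitforlisten helper used by the real test

    # Assignments are issued before framework_start_init, matching the traced run.
    $RPC accel_assign_opc -o copy -m incorrect   # overridden by the next call
    $RPC accel_assign_opc -o copy -m software
    $RPC framework_start_init

    # After init, the 'copy' opcode should report the software module.
    $RPC accel_get_opc_assignments | jq -r .copy | grep software

    kill "$tgt_pid"
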
00:07:40.399 [2024-07-23 03:07:06.802521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321568 ] 00:07:40.399 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.399 [2024-07-23 03:07:06.859842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.399 [2024-07-23 03:07:06.948011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.658 03:07:07 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.658 03:07:07 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.658 03:07:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.658 03:07:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.658 03:07:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.658 03:07:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.658 03:07:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.658 03:07:07 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.658 03:07:07 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.658 03:07:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 ************************************ 00:07:40.658 START TEST accel_assign_opcode 00:07:40.658 ************************************ 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 [2024-07-23 03:07:07.040737] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 [2024-07-23 03:07:07.048734] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.658 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.916 03:07:07 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.916 software 00:07:40.916 00:07:40.916 real 0m0.299s 00:07:40.916 user 0m0.038s 00:07:40.916 sys 0m0.008s 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.916 03:07:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.916 ************************************ 00:07:40.916 END TEST accel_assign_opcode 00:07:40.916 ************************************ 00:07:40.916 03:07:07 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 321568 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 321568 ']' 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 321568 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 321568 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 321568' 00:07:40.916 killing process with pid 321568 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@965 -- # kill 321568 00:07:40.916 03:07:07 accel_rpc -- common/autotest_common.sh@970 -- # wait 321568 00:07:41.482 00:07:41.482 real 0m1.091s 00:07:41.482 user 0m1.005s 00:07:41.482 sys 0m0.449s 00:07:41.482 03:07:07 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.482 03:07:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.482 ************************************ 00:07:41.482 END TEST accel_rpc 00:07:41.482 ************************************ 00:07:41.482 03:07:07 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:41.482 03:07:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.482 03:07:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.482 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:07:41.482 ************************************ 00:07:41.482 START TEST app_cmdline 00:07:41.482 ************************************ 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:41.482 * Looking for test storage... 
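Next, the app_cmdline test restricts spdk_tgt to an RPC allow-list and checks that only those methods work. A hypothetical condensed form of the calls visible in the trace that follows (the real cmdline.sh also cross-checks the reported git sha and the sorted method list):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 2   # stand-in for waitforlisten

    $RPC spdk_get_version                       # version blob: "SPDK v24.05.1-pre git sha1 5fa2f5086"
    $RPC rpc_get_methods | jq -r '.[]' | sort   # only the two allowed methods are expected

    # Anything outside the allow-list fails with JSON-RPC error -32601 "Method not found".
    $RPC env_dpdk_get_mem_stats || echo "rejected as expected"

    kill "$tgt_pid"
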
00:07:41.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.482 03:07:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.482 03:07:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=321772 00:07:41.482 03:07:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.482 03:07:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 321772 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 321772 ']' 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.482 03:07:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.482 [2024-07-23 03:07:07.942375] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:41.482 [2024-07-23 03:07:07.942450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321772 ] 00:07:41.482 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.482 [2024-07-23 03:07:08.000945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.741 [2024-07-23 03:07:08.090395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.999 03:07:08 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:41.999 03:07:08 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:41.999 03:07:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:41.999 { 00:07:41.999 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:41.999 "fields": { 00:07:41.999 "major": 24, 00:07:41.999 "minor": 5, 00:07:41.999 "patch": 1, 00:07:41.999 "suffix": "-pre", 00:07:41.999 "commit": "5fa2f5086" 00:07:41.999 } 00:07:41.999 } 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.255 03:07:08 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.255 03:07:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:42.255 03:07:08 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.255 03:07:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.255 03:07:08 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.256 03:07:08 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.526 request: 00:07:42.526 { 00:07:42.526 "method": "env_dpdk_get_mem_stats", 00:07:42.526 "req_id": 1 00:07:42.526 } 00:07:42.526 Got JSON-RPC error response 00:07:42.526 response: 00:07:42.526 { 00:07:42.526 "code": -32601, 00:07:42.526 "message": "Method not found" 00:07:42.526 } 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.526 03:07:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 321772 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 321772 ']' 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 321772 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 321772 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 321772' 00:07:42.526 killing process with pid 321772 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@965 -- # kill 321772 00:07:42.526 03:07:08 app_cmdline -- common/autotest_common.sh@970 -- # wait 321772 00:07:42.787 00:07:42.787 real 0m1.485s 00:07:42.787 user 0m1.818s 00:07:42.787 sys 0m0.457s 00:07:42.787 03:07:09 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.787 03:07:09 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.787 ************************************ 00:07:42.787 END TEST app_cmdline 00:07:42.787 ************************************ 00:07:42.787 03:07:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:42.787 03:07:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.787 03:07:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.787 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:07:43.044 ************************************ 00:07:43.044 START TEST version 00:07:43.044 ************************************ 00:07:43.044 03:07:09 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.044 * Looking for test storage... 00:07:43.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.044 03:07:09 version -- app/version.sh@17 -- # get_header_version major 00:07:43.044 03:07:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # cut -f2 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.044 03:07:09 version -- app/version.sh@17 -- # major=24 00:07:43.044 03:07:09 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.044 03:07:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # cut -f2 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.044 03:07:09 version -- app/version.sh@18 -- # minor=5 00:07:43.044 03:07:09 version -- app/version.sh@19 -- # get_header_version patch 00:07:43.044 03:07:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # cut -f2 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.044 03:07:09 version -- app/version.sh@19 -- # patch=1 00:07:43.044 03:07:09 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.044 03:07:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # cut -f2 00:07:43.044 03:07:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.044 03:07:09 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.045 03:07:09 version -- app/version.sh@22 -- # version=24.5 00:07:43.045 03:07:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.045 03:07:09 version -- app/version.sh@25 -- # version=24.5.1 00:07:43.045 03:07:09 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:43.045 03:07:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.045 03:07:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.045 03:07:09 
version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:43.045 03:07:09 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:43.045 00:07:43.045 real 0m0.106s 00:07:43.045 user 0m0.061s 00:07:43.045 sys 0m0.068s 00:07:43.045 03:07:09 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.045 03:07:09 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 ************************************ 00:07:43.045 END TEST version 00:07:43.045 ************************************ 00:07:43.045 03:07:09 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@198 -- # uname -s 00:07:43.045 03:07:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:43.045 03:07:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.045 03:07:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.045 03:07:09 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:43.045 03:07:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.045 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 03:07:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:43.045 03:07:09 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:43.045 03:07:09 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.045 03:07:09 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.045 03:07:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.045 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 ************************************ 00:07:43.045 START TEST nvmf_tcp 00:07:43.045 ************************************ 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.045 * Looking for test storage... 00:07:43.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.045 03:07:09 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.045 03:07:09 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.045 03:07:09 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.045 03:07:09 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.045 03:07:09 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.045 03:07:09 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.045 03:07:09 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:43.045 03:07:09 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:43.045 03:07:09 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.045 03:07:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.303 ************************************ 00:07:43.303 START TEST nvmf_example 00:07:43.303 ************************************ 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.303 * Looking for test storage... 
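Before nvmf_example can start a target, nvmftestinit (in the trace that follows) has to map supported NICs: it matches PCI device IDs such as Intel 0x159b (E810) against vendor 0x8086 and then looks up the kernel interfaces under each function's sysfs node. A rough, hypothetical sketch of that lookup, not the literal gather_supported_nvmf_pci_devs (which also covers E810 0x1592, X722 0x37d2 and several Mellanox IDs):

    #!/usr/bin/env bash
    shopt -s nullglob
    intel=0x8086

    # Enumerate PCI functions whose vendor/device pair is 8086:159b and print their
    # net devices, mirroring the "Found 0000:0a:00.0 (0x8086 - 0x159b)" and
    # "Found net devices under ..." lines in the trace below.
    for pci in $(lspci -Dnmm | awk '$3 == "\"8086\"" && $4 == "\"159b\"" {print $1}'); do
        echo "Found $pci ($intel - 0x159b)"
        for net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$net_dev")"
        done
    done
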
00:07:43.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.303 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.304 03:07:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:45.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:45.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:45.204 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:45.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:07:45.204 00:07:45.204 --- 10.0.0.2 ping statistics --- 00:07:45.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.204 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:07:45.204 00:07:45.204 --- 10.0.0.1 ping statistics --- 00:07:45.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.204 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.204 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:45.205 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=323684 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 323684 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 323684 ']' 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
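Pieced together from the xtrace above, nvmftestinit wires the two detected ice ports (cvl_0_0 / cvl_0_1) into a point-to-point NVMe/TCP path before any NVMe work starts. A consolidated sketch of the sequence as this run performed it (interface names and the 10.0.0.0/24 addresses are the ones detected above; comments are editorial):

  ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port in
  ping -c 1 10.0.0.2                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
  modprobe nvme-tcp                                     # kernel NVMe/TCP support

Both pings come back in well under a millisecond, so the example target is started inside the namespace and waitforlisten blocks until it answers on /var/tmp/spdk.sock.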
00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.463 03:07:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.463 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:46.395 03:07:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:46.395 EAL: No free 2048 kB hugepages reported on node 1 
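Once the target reports pid 323684 on /var/tmp/spdk.sock, the rpc_cmd calls above configure it and spdk_nvme_perf drives I/O against it. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same configuration can be read as the following sketch (commands are taken verbatim from the trace; the comments are editorial, and the RPC socket is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk):

  # configure the example target over /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                      # 64 MiB / 512 B-block RAM bdev -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: queue depth 64, 4 KiB random I/O, roughly 30% reads, for 10 seconds
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The ten-second run settles at roughly 14.9 K IOPS (about 58 MiB/s) with a 4.3 ms average latency, as the summary below reports.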
00:07:58.593 Initializing NVMe Controllers 00:07:58.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:58.593 Initialization complete. Launching workers. 00:07:58.593 ======================================================== 00:07:58.593 Latency(us) 00:07:58.593 Device Information : IOPS MiB/s Average min max 00:07:58.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14921.24 58.29 4289.89 893.88 16063.16 00:07:58.593 ======================================================== 00:07:58.593 Total : 14921.24 58.29 4289.89 893.88 16063.16 00:07:58.593 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.593 rmmod nvme_tcp 00:07:58.593 rmmod nvme_fabrics 00:07:58.593 rmmod nvme_keyring 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 323684 ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 323684 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 323684 ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 323684 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323684 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323684' 00:07:58.593 killing process with pid 323684 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 323684 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 323684 00:07:58.593 nvmf threads initialize successfully 00:07:58.593 bdev subsystem init successfully 00:07:58.593 created a nvmf target service 00:07:58.593 create targets's poll groups done 00:07:58.593 all subsystems of target started 00:07:58.593 nvmf target is running 00:07:58.593 all subsystems of target stopped 00:07:58.593 destroy targets's poll groups done 00:07:58.593 destroyed the nvmf target service 00:07:58.593 bdev subsystem finish successfully 00:07:58.593 nvmf threads destroy successfully 00:07:58.593 03:07:23 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.593 03:07:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.164 00:07:59.164 real 0m15.882s 00:07:59.164 user 0m45.300s 00:07:59.164 sys 0m3.260s 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.164 03:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.164 ************************************ 00:07:59.164 END TEST nvmf_example 00:07:59.164 ************************************ 00:07:59.164 03:07:25 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.164 03:07:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:59.164 03:07:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.164 03:07:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.164 ************************************ 00:07:59.164 START TEST nvmf_filesystem 00:07:59.164 ************************************ 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.164 * Looking for test storage... 
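The nvmf_example suite therefore finishes in roughly 16 seconds of wall-clock time, and run_test moves straight on to the filesystem suite. run_test appears to be the harness responsible for the START/END TEST banners and the real/user/sys timing lines above; the next invocation, as echoed in the trace, is:

  run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp

filesystem.sh begins by re-sourcing autotest_common.sh, which is what produces the long dump of build configuration values and SPDK_TEST_* flags that follows.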
00:07:59.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:59.164 03:07:25 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:59.164 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.165 03:07:25 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:59.165 #define SPDK_CONFIG_H 00:07:59.165 #define SPDK_CONFIG_APPS 1 00:07:59.165 #define SPDK_CONFIG_ARCH native 00:07:59.165 #undef SPDK_CONFIG_ASAN 00:07:59.165 #undef SPDK_CONFIG_AVAHI 00:07:59.165 #undef SPDK_CONFIG_CET 00:07:59.165 #define SPDK_CONFIG_COVERAGE 1 00:07:59.165 #define SPDK_CONFIG_CROSS_PREFIX 00:07:59.165 #undef SPDK_CONFIG_CRYPTO 00:07:59.165 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:59.165 #undef SPDK_CONFIG_CUSTOMOCF 00:07:59.165 #undef SPDK_CONFIG_DAOS 00:07:59.165 #define SPDK_CONFIG_DAOS_DIR 00:07:59.165 #define SPDK_CONFIG_DEBUG 1 00:07:59.165 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:59.165 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:59.165 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:59.165 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:59.165 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:59.165 #undef SPDK_CONFIG_DPDK_UADK 00:07:59.165 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:59.165 #define SPDK_CONFIG_EXAMPLES 1 00:07:59.165 #undef SPDK_CONFIG_FC 00:07:59.165 #define SPDK_CONFIG_FC_PATH 00:07:59.165 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:59.165 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:59.165 #undef SPDK_CONFIG_FUSE 00:07:59.165 #undef SPDK_CONFIG_FUZZER 00:07:59.165 #define SPDK_CONFIG_FUZZER_LIB 00:07:59.165 #undef SPDK_CONFIG_GOLANG 00:07:59.165 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:59.165 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:59.165 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:59.165 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:59.165 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:59.165 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:59.165 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:59.165 #define SPDK_CONFIG_IDXD 1 00:07:59.165 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:59.165 #undef SPDK_CONFIG_IPSEC_MB 00:07:59.165 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:59.165 #define SPDK_CONFIG_ISAL 1 00:07:59.165 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:59.165 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:59.165 #define SPDK_CONFIG_LIBDIR 00:07:59.165 #undef SPDK_CONFIG_LTO 00:07:59.165 #define SPDK_CONFIG_MAX_LCORES 
00:07:59.165 #define SPDK_CONFIG_NVME_CUSE 1 00:07:59.165 #undef SPDK_CONFIG_OCF 00:07:59.165 #define SPDK_CONFIG_OCF_PATH 00:07:59.165 #define SPDK_CONFIG_OPENSSL_PATH 00:07:59.165 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:59.165 #define SPDK_CONFIG_PGO_DIR 00:07:59.165 #undef SPDK_CONFIG_PGO_USE 00:07:59.165 #define SPDK_CONFIG_PREFIX /usr/local 00:07:59.165 #undef SPDK_CONFIG_RAID5F 00:07:59.165 #undef SPDK_CONFIG_RBD 00:07:59.165 #define SPDK_CONFIG_RDMA 1 00:07:59.165 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:59.165 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:59.165 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:59.165 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:59.165 #define SPDK_CONFIG_SHARED 1 00:07:59.165 #undef SPDK_CONFIG_SMA 00:07:59.165 #define SPDK_CONFIG_TESTS 1 00:07:59.165 #undef SPDK_CONFIG_TSAN 00:07:59.165 #define SPDK_CONFIG_UBLK 1 00:07:59.165 #define SPDK_CONFIG_UBSAN 1 00:07:59.165 #undef SPDK_CONFIG_UNIT_TESTS 00:07:59.165 #undef SPDK_CONFIG_URING 00:07:59.165 #define SPDK_CONFIG_URING_PATH 00:07:59.165 #undef SPDK_CONFIG_URING_ZNS 00:07:59.165 #undef SPDK_CONFIG_USDT 00:07:59.165 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:59.165 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:59.165 #define SPDK_CONFIG_VFIO_USER 1 00:07:59.165 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:59.165 #define SPDK_CONFIG_VHOST 1 00:07:59.165 #define SPDK_CONFIG_VIRTIO 1 00:07:59.165 #undef SPDK_CONFIG_VTUNE 00:07:59.165 #define SPDK_CONFIG_VTUNE_DIR 00:07:59.165 #define SPDK_CONFIG_WERROR 1 00:07:59.165 #define SPDK_CONFIG_WPDK_DIR 00:07:59.165 #undef SPDK_CONFIG_XNVME 00:07:59.165 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:59.165 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:59.166 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 325394 ]] 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 325394 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:59.167 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.dix18h 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dix18h/tests/target /tmp/spdk.dix18h 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52966252544 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9028456448 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941716480 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:59.168 03:07:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996332544 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1024000 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:59.168 * Looking for test storage... 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52966252544 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11243048960 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:59.168 
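The trace above is set_test_storage deciding where the filesystem tests may write scratch data: it snapshots df -T into per-mount arrays, then walks the candidate directories (the test dir, the mktemp fallback under /tmp/spdk.dix18h, and the fallback root) and keeps the first one whose backing mount has at least the requested ~2 GiB free; here that is the overlay root with ~52 GB available, so SPDK_TEST_STORAGE ends up under test/nvmf/target. A minimal stand-alone sketch of the same idea, assuming nothing from autotest_common.sh (pick_test_storage and the candidate paths below are illustrative names, not the harness's):

    #!/usr/bin/env bash
    # Sketch: pick the first candidate directory whose filesystem has enough free space.
    pick_test_storage() {
        local requested_size=$1; shift
        local dir avail
        for dir in "$@"; do
            mkdir -p "$dir" || continue
            # df -P prints 1K blocks; column 4 is the available space of the mount backing $dir
            avail=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') * 1024 ))
            if (( avail >= requested_size )); then
                echo "$dir"
                return 0
            fi
        done
        return 1
    }

    # Usage: ask for ~2 GiB, mirroring the requested_size seen in the trace above.
    pick_test_storage $((2 * 1024 * 1024 * 1024)) /var/tmp/spdk-tests /tmp/spdk-tests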
03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.168 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.169 03:07:25 
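Every prefix of the form "03:07:25 nvmf_tcp.nvmf_filesystem -- file@line -- #" in this log comes from the PS4 value set just above combined with set -x: \t is the prompt-style timestamp, $test_domain is the current test name, and the BASH_SOURCE expansion keeps only the last two path components of the traced script. A small self-contained sketch of the same mechanism (the script body and the test_domain value are made up for illustration):

    #!/usr/bin/env bash
    # Sketch: reproduce the timestamped xtrace prefix used by this harness.
    test_domain=demo.trace
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
    echo hello    # traced as e.g.:  12:34:56 demo.trace -- ./demo.sh@6 -- $ echo hello
    set +x

The trailing "#" in this log, rather than "$", is simply the \$ prompt escape expanding for a root shell.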
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.169 03:07:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.702 03:07:27 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:01.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.702 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:08:01.703 00:08:01.703 --- 10.0.0.2 ping statistics --- 00:08:01.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.703 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:08:01.703 00:08:01.703 --- 10.0.0.1 ping statistics --- 00:08:01.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.703 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.703 ************************************ 00:08:01.703 START TEST nvmf_filesystem_no_in_capsule 00:08:01.703 ************************************ 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.703 03:07:27 
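What nvmf_tcp_init did above: the two E810 ports (cvl_0_0, cvl_0_1) act as a loop between target and initiator. The target port is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator port keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened in iptables, and both directions are pinged before nvme-tcp is loaded. A condensed sketch of the same setup with placeholder interface names (eth_target/eth_init stand in for the two cabled ports):

    #!/usr/bin/env bash
    # Sketch: isolate the NVMe/TCP target behind a network namespace; the initiator stays in the root ns.
    set -e
    TARGET_IF=eth_target      # placeholder for the target-side port (cvl_0_0 in this run)
    INIT_IF=eth_init          # placeholder for the initiator-side port (cvl_0_1 in this run)
    NS=nvmf_target_ns

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INIT_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow the NVMe/TCP listener port through the host firewall.
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
    modprobe nvme-tcp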
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.703 03:07:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=327023 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 327023 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 327023 ']' 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.703 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.703 [2024-07-23 03:07:28.050845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:01.703 [2024-07-23 03:07:28.050933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.703 [2024-07-23 03:07:28.122139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.703 [2024-07-23 03:07:28.216116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.703 [2024-07-23 03:07:28.216179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.703 [2024-07-23 03:07:28.216206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.703 [2024-07-23 03:07:28.216220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.703 [2024-07-23 03:07:28.216233] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.703 [2024-07-23 03:07:28.216340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.703 [2024-07-23 03:07:28.216418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.703 [2024-07-23 03:07:28.216501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.703 [2024-07-23 03:07:28.216503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.961 [2024-07-23 03:07:28.378434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.961 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.219 Malloc1 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.219 [2024-07-23 03:07:28.566982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:02.219 { 00:08:02.219 "name": "Malloc1", 00:08:02.219 "aliases": [ 00:08:02.219 "a667bf0e-5e7d-42fa-8ca9-7e32844024a5" 00:08:02.219 ], 00:08:02.219 "product_name": "Malloc disk", 00:08:02.219 "block_size": 512, 00:08:02.219 "num_blocks": 1048576, 00:08:02.219 "uuid": "a667bf0e-5e7d-42fa-8ca9-7e32844024a5", 00:08:02.219 "assigned_rate_limits": { 00:08:02.219 "rw_ios_per_sec": 0, 00:08:02.219 "rw_mbytes_per_sec": 0, 00:08:02.219 "r_mbytes_per_sec": 0, 00:08:02.219 "w_mbytes_per_sec": 0 00:08:02.219 }, 00:08:02.219 "claimed": true, 00:08:02.219 "claim_type": "exclusive_write", 00:08:02.219 "zoned": false, 00:08:02.219 "supported_io_types": { 00:08:02.219 "read": true, 00:08:02.219 "write": true, 00:08:02.219 "unmap": true, 00:08:02.219 "write_zeroes": true, 00:08:02.219 "flush": true, 00:08:02.219 "reset": true, 00:08:02.219 "compare": false, 00:08:02.219 "compare_and_write": false, 00:08:02.219 "abort": true, 00:08:02.219 "nvme_admin": false, 00:08:02.219 "nvme_io": false 00:08:02.219 }, 00:08:02.219 "memory_domains": [ 00:08:02.219 { 00:08:02.219 "dma_device_id": "system", 00:08:02.219 "dma_device_type": 1 00:08:02.219 }, 00:08:02.219 { 00:08:02.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.219 "dma_device_type": 2 00:08:02.219 } 00:08:02.219 ], 00:08:02.219 "driver_specific": {} 00:08:02.219 } 00:08:02.219 ]' 00:08:02.219 
03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:02.219 03:07:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.785 03:07:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.785 03:07:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:02.785 03:07:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.785 03:07:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:02.785 03:07:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.308 03:07:31 
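The target-side configuration above boils down to five JSON-RPC calls against the nvmf_tgt running inside the namespace, followed by a plain nvme connect from the initiator side. A condensed sketch using SPDK's rpc.py with the same arguments as the trace (rpc_cmd in the log is a thin wrapper that forwards to rpc.py against DEFAULT_RPC_ADDR, /var/tmp/spdk.sock as exported earlier; the RPC path below is illustrative):

    #!/usr/bin/env bash
    # Sketch: configure the running nvmf_tgt over JSON-RPC, then attach from the initiator.
    RPC="./scripts/rpc.py"   # lives in scripts/ of an SPDK checkout; talks to /var/tmp/spdk.sock by default

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data (this test variant)
    $RPC bdev_malloc_create 512 512 -b Malloc1                # 512 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, then locate the new block device by its serial number.
    # (The harness additionally passes --hostnqn/--hostid generated with `nvme gen-hostnqn`.)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME      # -> nvme0n1 in the run above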
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.308 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:05.565 03:07:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.498 ************************************ 00:08:06.498 START TEST filesystem_ext4 00:08:06.498 ************************************ 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:06.498 03:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.498 mke2fs 1.46.5 (30-Dec-2021) 00:08:06.498 Discarding device blocks: 0/522240 done 00:08:06.498 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:06.498 
Filesystem UUID: b4bb622f-fe00-4abb-aea4-19b429b560c4 00:08:06.498 Superblock backups stored on blocks: 00:08:06.498 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:06.498 00:08:06.498 Allocating group tables: 0/64 done 00:08:06.498 Writing inode tables: 0/64 done 00:08:06.756 Creating journal (8192 blocks): done 00:08:07.833 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:08:07.833 00:08:07.833 03:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:07.833 03:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 327023 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.772 00:08:08.772 real 0m2.266s 00:08:08.772 user 0m0.012s 00:08:08.772 sys 0m0.058s 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 ************************************ 00:08:08.772 END TEST filesystem_ext4 00:08:08.772 ************************************ 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 ************************************ 00:08:08.772 START TEST filesystem_btrfs 00:08:08.772 ************************************ 00:08:08.772 03:07:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:08.772 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.072 btrfs-progs v6.6.2 00:08:09.072 See https://btrfs.readthedocs.io for more information. 00:08:09.072 00:08:09.072 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:09.072 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.072 this does not affect your deployments: 00:08:09.072 - DUP for metadata (-m dup) 00:08:09.072 - enabled no-holes (-O no-holes) 00:08:09.072 - enabled free-space-tree (-R free-space-tree) 00:08:09.072 00:08:09.072 Label: (null) 00:08:09.072 UUID: a99dbc57-b7c4-4ca2-9331-9f047f67c352 00:08:09.072 Node size: 16384 00:08:09.072 Sector size: 4096 00:08:09.072 Filesystem size: 510.00MiB 00:08:09.072 Block group profiles: 00:08:09.072 Data: single 8.00MiB 00:08:09.072 Metadata: DUP 32.00MiB 00:08:09.072 System: DUP 8.00MiB 00:08:09.072 SSD detected: yes 00:08:09.072 Zoned device: no 00:08:09.072 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.072 Runtime features: free-space-tree 00:08:09.072 Checksum: crc32c 00:08:09.072 Number of devices: 1 00:08:09.072 Devices: 00:08:09.072 ID SIZE PATH 00:08:09.072 1 510.00MiB /dev/nvme0n1p1 00:08:09.072 00:08:09.072 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:09.072 03:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 327023 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.004 00:08:10.004 real 0m1.250s 00:08:10.004 user 0m0.019s 00:08:10.004 sys 0m0.114s 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.004 ************************************ 00:08:10.004 END TEST filesystem_btrfs 00:08:10.004 ************************************ 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:10.004 03:07:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.004 ************************************ 00:08:10.004 START TEST filesystem_xfs 00:08:10.004 ************************************ 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:10.004 03:07:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.262 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.262 = sectsz=512 attr=2, projid32bit=1 00:08:10.262 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.262 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.262 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.262 = sunit=0 swidth=0 blks 00:08:10.262 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.262 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.262 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.262 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:11.195 Discarding blocks...Done. 
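The three mkfs runs above (ext4, btrfs, xfs) all go through the make_filesystem helper from common/autotest_common.sh, whose xtrace shows it picking -F for ext4 and -f for the other filesystems before invoking mkfs. A minimal bash sketch of that flow, reconstructed only from the traced commands; the if/else branch and the reading of i as a retry counter are assumptions inferred from the trace, not copied from the script:

make_filesystem() {
    # Arguments as seen in the trace: filesystem type and target partition.
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # ext4 forces with -F; btrfs and xfs force with -f (autotest_common.sh@927-930).
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # autotest_common.sh@933: run the matching mkfs on the partition.
    mkfs.$fstype $force "$dev_name"
}

# Example invocation matching the trace above: make_filesystem xfs /dev/nvme0n1p1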
00:08:11.195 03:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:11.195 03:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 327023 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.722 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.723 00:08:13.723 real 0m3.619s 00:08:13.723 user 0m0.019s 00:08:13.723 sys 0m0.062s 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.723 ************************************ 00:08:13.723 END TEST filesystem_xfs 00:08:13.723 ************************************ 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.723 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:13.981 
03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 327023 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 327023 ']' 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 327023 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 327023 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 327023' 00:08:13.981 killing process with pid 327023 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 327023 00:08:13.981 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 327023 00:08:14.240 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.240 00:08:14.240 real 0m12.794s 00:08:14.240 user 0m49.177s 00:08:14.240 sys 0m1.807s 00:08:14.240 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.240 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.240 ************************************ 00:08:14.240 END TEST nvmf_filesystem_no_in_capsule 00:08:14.240 ************************************ 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.499 
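Before the in-capsule variant starts below, note the shape of the check that target/filesystem.sh ran once per filesystem above. This is a sketch assembled from the traced commands (filesystem.sh@23-43); the $nvmfpid variable name is an assumption, the trace uses the literal pid 327023:

# Mount the freshly formatted partition, do a small write, clean up, unmount,
# then confirm the nvmf target process and the block devices are still present.
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1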
************************************ 00:08:14.499 START TEST nvmf_filesystem_in_capsule 00:08:14.499 ************************************ 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.499 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=328833 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 328833 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 328833 ']' 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.500 03:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 [2024-07-23 03:07:40.897667] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:14.500 [2024-07-23 03:07:40.897767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.500 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.500 [2024-07-23 03:07:40.965795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.500 [2024-07-23 03:07:41.055090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.500 [2024-07-23 03:07:41.055155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.500 [2024-07-23 03:07:41.055181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.500 [2024-07-23 03:07:41.055195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.500 [2024-07-23 03:07:41.055207] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:14.500 [2024-07-23 03:07:41.055300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.500 [2024-07-23 03:07:41.055355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.500 [2024-07-23 03:07:41.055469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.500 [2024-07-23 03:07:41.055471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.758 [2024-07-23 03:07:41.207437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.758 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 Malloc1 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 03:07:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 [2024-07-23 03:07:41.393850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.017 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:15.017 { 00:08:15.017 "name": "Malloc1", 00:08:15.017 "aliases": [ 00:08:15.017 "c5794e43-3b85-43ff-8560-f5805183ad79" 00:08:15.017 ], 00:08:15.017 "product_name": "Malloc disk", 00:08:15.017 "block_size": 512, 00:08:15.017 "num_blocks": 1048576, 00:08:15.017 "uuid": "c5794e43-3b85-43ff-8560-f5805183ad79", 00:08:15.017 "assigned_rate_limits": { 00:08:15.017 "rw_ios_per_sec": 0, 00:08:15.017 "rw_mbytes_per_sec": 0, 00:08:15.017 "r_mbytes_per_sec": 0, 00:08:15.017 "w_mbytes_per_sec": 0 00:08:15.017 }, 00:08:15.017 "claimed": true, 00:08:15.017 "claim_type": "exclusive_write", 00:08:15.017 "zoned": false, 00:08:15.017 "supported_io_types": { 00:08:15.017 "read": true, 00:08:15.017 "write": true, 00:08:15.017 "unmap": true, 00:08:15.017 "write_zeroes": true, 00:08:15.017 "flush": true, 00:08:15.017 "reset": true, 00:08:15.017 "compare": false, 00:08:15.017 "compare_and_write": false, 00:08:15.017 "abort": true, 00:08:15.017 "nvme_admin": false, 00:08:15.017 "nvme_io": false 00:08:15.017 }, 00:08:15.017 "memory_domains": [ 00:08:15.017 { 00:08:15.017 "dma_device_id": "system", 00:08:15.017 "dma_device_type": 1 00:08:15.017 }, 00:08:15.017 { 00:08:15.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.017 "dma_device_type": 2 00:08:15.017 } 00:08:15.017 ], 00:08:15.017 "driver_specific": {} 00:08:15.017 } 00:08:15.017 ]' 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:15.018 03:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.584 03:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.584 03:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:15.584 03:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.584 03:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:15.584 03:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:18.109 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:18.675 03:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:19.608 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:19.608 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.609 ************************************ 00:08:19.609 START TEST filesystem_in_capsule_ext4 00:08:19.609 ************************************ 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:19.609 03:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:19.609 mke2fs 1.46.5 (30-Dec-2021) 00:08:19.609 Discarding device blocks: 0/522240 done 00:08:19.609 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:19.609 Filesystem UUID: b618b5de-ae72-4976-bf4e-5de0aa6e7417 00:08:19.609 Superblock backups stored on blocks: 00:08:19.609 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:19.609 00:08:19.609 Allocating group tables: 0/64 done 00:08:19.609 Writing inode tables: 0/64 done 00:08:20.174 Creating journal (8192 blocks): done 00:08:20.174 Writing superblocks and filesystem accounting information: 0/64 done 00:08:20.174 00:08:20.174 03:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:20.174 03:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 328833 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.107 00:08:21.107 real 0m1.518s 00:08:21.107 user 0m0.010s 00:08:21.107 sys 0m0.056s 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:21.107 ************************************ 00:08:21.107 END TEST filesystem_in_capsule_ext4 00:08:21.107 ************************************ 00:08:21.107 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.108 ************************************ 00:08:21.108 START TEST filesystem_in_capsule_btrfs 00:08:21.108 ************************************ 00:08:21.108 03:07:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:21.108 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.365 btrfs-progs v6.6.2 00:08:21.365 See https://btrfs.readthedocs.io for more information. 00:08:21.365 00:08:21.365 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:21.365 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.365 this does not affect your deployments: 00:08:21.365 - DUP for metadata (-m dup) 00:08:21.365 - enabled no-holes (-O no-holes) 00:08:21.365 - enabled free-space-tree (-R free-space-tree) 00:08:21.365 00:08:21.365 Label: (null) 00:08:21.365 UUID: 28287687-e2c2-4981-bc95-9dd8f73e8d15 00:08:21.365 Node size: 16384 00:08:21.365 Sector size: 4096 00:08:21.366 Filesystem size: 510.00MiB 00:08:21.366 Block group profiles: 00:08:21.366 Data: single 8.00MiB 00:08:21.366 Metadata: DUP 32.00MiB 00:08:21.366 System: DUP 8.00MiB 00:08:21.366 SSD detected: yes 00:08:21.366 Zoned device: no 00:08:21.366 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.366 Runtime features: free-space-tree 00:08:21.366 Checksum: crc32c 00:08:21.366 Number of devices: 1 00:08:21.366 Devices: 00:08:21.366 ID SIZE PATH 00:08:21.366 1 510.00MiB /dev/nvme0n1p1 00:08:21.366 00:08:21.366 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:21.366 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 328833 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.624 00:08:21.624 real 0m0.469s 00:08:21.624 user 0m0.016s 00:08:21.624 sys 0m0.120s 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.624 03:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.624 ************************************ 00:08:21.624 END TEST filesystem_in_capsule_btrfs 00:08:21.624 ************************************ 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.624 ************************************ 00:08:21.624 START TEST filesystem_in_capsule_xfs 00:08:21.624 ************************************ 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:21.624 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.624 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.624 = sectsz=512 attr=2, projid32bit=1 00:08:21.624 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.624 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.624 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.624 = sunit=0 swidth=0 blks 00:08:21.624 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:21.624 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.624 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.624 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:22.559 Discarding blocks...Done. 
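The in-capsule run that this xfs output belongs to was set up with the same steps as the earlier no_in_capsule run, except that the TCP transport is created with 4096 bytes of in-capsule data. A sketch condensed from the rpc_cmd and nvme connect calls traced above (filesystem.sh@52-60); rpc_cmd is the test suite's RPC wrapper exactly as it appears in the trace, nothing beyond the logged commands is assumed:

# Target side: transport with 4096-byte in-capsule data, one malloc bdev,
# one subsystem with that bdev as a namespace, listening on 10.0.0.2:4420.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect over TCP, then wait for the namespace to show up in lsblk.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420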
00:08:22.559 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:22.559 03:07:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 328833 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.086 00:08:25.086 real 0m3.246s 00:08:25.086 user 0m0.017s 00:08:25.086 sys 0m0.061s 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.086 ************************************ 00:08:25.086 END TEST filesystem_in_capsule_xfs 00:08:25.086 ************************************ 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:25.086 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.344 03:07:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 328833 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 328833 ']' 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 328833 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 328833 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 328833' 00:08:25.344 killing process with pid 328833 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 328833 00:08:25.344 03:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 328833 00:08:25.911 03:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.911 00:08:25.911 real 0m11.390s 00:08:25.911 user 0m43.575s 00:08:25.911 sys 0m1.792s 00:08:25.911 03:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.911 03:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 ************************************ 00:08:25.911 END TEST nvmf_filesystem_in_capsule 00:08:25.911 ************************************ 00:08:25.911 03:07:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.912 rmmod nvme_tcp 00:08:25.912 rmmod nvme_fabrics 00:08:25.912 rmmod nvme_keyring 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.912 03:07:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.816 03:07:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.816 00:08:27.816 real 0m28.816s 00:08:27.816 user 1m33.763s 00:08:27.816 sys 0m5.223s 00:08:27.816 03:07:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.816 03:07:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.816 ************************************ 00:08:27.816 END TEST nvmf_filesystem 00:08:27.816 ************************************ 00:08:27.816 03:07:54 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:27.816 03:07:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:27.816 03:07:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.816 03:07:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.074 ************************************ 00:08:28.074 START TEST nvmf_target_discovery 00:08:28.074 ************************************ 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:28.074 * Looking for test storage... 
00:08:28.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.074 03:07:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.017 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.018 03:07:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:30.018 00:08:30.018 --- 10.0.0.2 ping statistics --- 00:08:30.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.018 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:08:30.018 00:08:30.018 --- 10.0.0.1 ping statistics --- 00:08:30.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.018 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=332188 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 332188 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 332188 ']' 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:30.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.018 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.277 [2024-07-23 03:07:56.620187] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:30.277 [2024-07-23 03:07:56.620265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.277 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.277 [2024-07-23 03:07:56.695196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.277 [2024-07-23 03:07:56.790842] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.277 [2024-07-23 03:07:56.790911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.277 [2024-07-23 03:07:56.790927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.277 [2024-07-23 03:07:56.790941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.277 [2024-07-23 03:07:56.790953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.277 [2024-07-23 03:07:56.791011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.277 [2024-07-23 03:07:56.791080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.277 [2024-07-23 03:07:56.791103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.277 [2024-07-23 03:07:56.791106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.535 [2024-07-23 03:07:56.941462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.535 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:30.535 03:07:56 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 Null1 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 [2024-07-23 03:07:56.981780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 Null2 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:30.536 03:07:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 Null3 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 Null4 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.536 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:30.794 00:08:30.794 Discovery Log Number of Records 6, Generation counter 6 00:08:30.794 =====Discovery Log Entry 0====== 00:08:30.794 trtype: tcp 00:08:30.794 adrfam: ipv4 00:08:30.794 subtype: current discovery subsystem 00:08:30.794 treq: not required 00:08:30.794 portid: 0 00:08:30.794 trsvcid: 4420 00:08:30.794 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:30.794 traddr: 10.0.0.2 00:08:30.794 eflags: explicit discovery connections, duplicate discovery information 00:08:30.794 sectype: none 00:08:30.794 =====Discovery Log Entry 1====== 00:08:30.794 trtype: tcp 00:08:30.794 adrfam: ipv4 00:08:30.794 subtype: nvme subsystem 00:08:30.794 treq: not required 00:08:30.794 portid: 0 00:08:30.794 trsvcid: 4420 00:08:30.794 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:30.794 traddr: 10.0.0.2 00:08:30.794 eflags: none 00:08:30.794 sectype: none 00:08:30.794 =====Discovery Log Entry 2====== 00:08:30.794 trtype: tcp 00:08:30.794 adrfam: ipv4 00:08:30.794 subtype: nvme subsystem 00:08:30.794 treq: not required 00:08:30.794 portid: 0 00:08:30.794 trsvcid: 4420 00:08:30.794 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:30.794 traddr: 10.0.0.2 00:08:30.794 eflags: none 00:08:30.794 sectype: none 00:08:30.794 =====Discovery Log Entry 3====== 00:08:30.794 trtype: tcp 00:08:30.794 adrfam: ipv4 00:08:30.794 subtype: nvme subsystem 00:08:30.794 treq: not required 00:08:30.794 portid: 0 00:08:30.794 trsvcid: 4420 00:08:30.794 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:30.794 traddr: 10.0.0.2 00:08:30.794 eflags: none 00:08:30.794 sectype: none 00:08:30.794 =====Discovery Log Entry 4====== 00:08:30.794 trtype: tcp 00:08:30.794 adrfam: ipv4 00:08:30.794 subtype: nvme subsystem 00:08:30.794 treq: not required 
00:08:30.794 portid: 0 00:08:30.794 trsvcid: 4420 00:08:30.794 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:30.794 traddr: 10.0.0.2 00:08:30.794 eflags: none 00:08:30.794 sectype: none 00:08:30.795 =====Discovery Log Entry 5====== 00:08:30.795 trtype: tcp 00:08:30.795 adrfam: ipv4 00:08:30.795 subtype: discovery subsystem referral 00:08:30.795 treq: not required 00:08:30.795 portid: 0 00:08:30.795 trsvcid: 4430 00:08:30.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:30.795 traddr: 10.0.0.2 00:08:30.795 eflags: none 00:08:30.795 sectype: none 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:30.795 Perform nvmf subsystem discovery via RPC 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 [ 00:08:30.795 { 00:08:30.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:30.795 "subtype": "Discovery", 00:08:30.795 "listen_addresses": [ 00:08:30.795 { 00:08:30.795 "trtype": "TCP", 00:08:30.795 "adrfam": "IPv4", 00:08:30.795 "traddr": "10.0.0.2", 00:08:30.795 "trsvcid": "4420" 00:08:30.795 } 00:08:30.795 ], 00:08:30.795 "allow_any_host": true, 00:08:30.795 "hosts": [] 00:08:30.795 }, 00:08:30.795 { 00:08:30.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.795 "subtype": "NVMe", 00:08:30.795 "listen_addresses": [ 00:08:30.795 { 00:08:30.795 "trtype": "TCP", 00:08:30.795 "adrfam": "IPv4", 00:08:30.795 "traddr": "10.0.0.2", 00:08:30.795 "trsvcid": "4420" 00:08:30.795 } 00:08:30.795 ], 00:08:30.795 "allow_any_host": true, 00:08:30.795 "hosts": [], 00:08:30.795 "serial_number": "SPDK00000000000001", 00:08:30.795 "model_number": "SPDK bdev Controller", 00:08:30.795 "max_namespaces": 32, 00:08:30.795 "min_cntlid": 1, 00:08:30.795 "max_cntlid": 65519, 00:08:30.795 "namespaces": [ 00:08:30.795 { 00:08:30.795 "nsid": 1, 00:08:30.795 "bdev_name": "Null1", 00:08:30.795 "name": "Null1", 00:08:30.795 "nguid": "4EC3B08706BD4E4DB2C3D55A4645F594", 00:08:30.795 "uuid": "4ec3b087-06bd-4e4d-b2c3-d55a4645f594" 00:08:30.795 } 00:08:30.795 ] 00:08:30.795 }, 00:08:30.795 { 00:08:30.795 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:30.795 "subtype": "NVMe", 00:08:30.795 "listen_addresses": [ 00:08:30.795 { 00:08:30.795 "trtype": "TCP", 00:08:30.795 "adrfam": "IPv4", 00:08:30.795 "traddr": "10.0.0.2", 00:08:30.795 "trsvcid": "4420" 00:08:30.795 } 00:08:30.795 ], 00:08:30.795 "allow_any_host": true, 00:08:30.795 "hosts": [], 00:08:30.795 "serial_number": "SPDK00000000000002", 00:08:30.795 "model_number": "SPDK bdev Controller", 00:08:30.795 "max_namespaces": 32, 00:08:30.795 "min_cntlid": 1, 00:08:30.795 "max_cntlid": 65519, 00:08:30.795 "namespaces": [ 00:08:30.795 { 00:08:30.795 "nsid": 1, 00:08:30.795 "bdev_name": "Null2", 00:08:30.795 "name": "Null2", 00:08:30.795 "nguid": "EE631C89672D497DA94581AD730FCEB9", 00:08:30.795 "uuid": "ee631c89-672d-497d-a945-81ad730fceb9" 00:08:30.795 } 00:08:30.795 ] 00:08:30.795 }, 00:08:30.795 { 00:08:30.795 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:30.795 "subtype": "NVMe", 00:08:30.795 "listen_addresses": [ 00:08:30.795 { 00:08:30.795 "trtype": "TCP", 00:08:30.795 "adrfam": "IPv4", 00:08:30.795 "traddr": "10.0.0.2", 00:08:30.795 "trsvcid": "4420" 00:08:30.795 } 00:08:30.795 ], 00:08:30.795 "allow_any_host": true, 
00:08:30.795 "hosts": [], 00:08:30.795 "serial_number": "SPDK00000000000003", 00:08:30.795 "model_number": "SPDK bdev Controller", 00:08:30.795 "max_namespaces": 32, 00:08:30.795 "min_cntlid": 1, 00:08:30.795 "max_cntlid": 65519, 00:08:30.795 "namespaces": [ 00:08:30.795 { 00:08:30.795 "nsid": 1, 00:08:30.795 "bdev_name": "Null3", 00:08:30.795 "name": "Null3", 00:08:30.795 "nguid": "EE5BE567AB0E46DB8FE588BBCC08E6C2", 00:08:30.795 "uuid": "ee5be567-ab0e-46db-8fe5-88bbcc08e6c2" 00:08:30.795 } 00:08:30.795 ] 00:08:30.795 }, 00:08:30.795 { 00:08:30.795 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:30.795 "subtype": "NVMe", 00:08:30.795 "listen_addresses": [ 00:08:30.795 { 00:08:30.795 "trtype": "TCP", 00:08:30.795 "adrfam": "IPv4", 00:08:30.795 "traddr": "10.0.0.2", 00:08:30.795 "trsvcid": "4420" 00:08:30.795 } 00:08:30.795 ], 00:08:30.795 "allow_any_host": true, 00:08:30.795 "hosts": [], 00:08:30.795 "serial_number": "SPDK00000000000004", 00:08:30.795 "model_number": "SPDK bdev Controller", 00:08:30.795 "max_namespaces": 32, 00:08:30.795 "min_cntlid": 1, 00:08:30.795 "max_cntlid": 65519, 00:08:30.795 "namespaces": [ 00:08:30.795 { 00:08:30.795 "nsid": 1, 00:08:30.795 "bdev_name": "Null4", 00:08:30.795 "name": "Null4", 00:08:30.795 "nguid": "BF544A0176234E98A6D721255450ADC6", 00:08:30.795 "uuid": "bf544a01-7623-4e98-a6d7-21255450adc6" 00:08:30.795 } 00:08:30.795 ] 00:08:30.795 } 00:08:30.795 ] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.795 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.796 rmmod nvme_tcp 00:08:30.796 rmmod nvme_fabrics 00:08:30.796 rmmod nvme_keyring 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:30.796 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 332188 ']' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 332188 ']' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 332188' 00:08:31.056 killing process with pid 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 332188 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.056 03:07:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.596 03:07:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.596 00:08:33.596 real 0m5.258s 00:08:33.596 user 0m4.071s 00:08:33.596 sys 0m1.807s 00:08:33.596 03:07:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.596 03:07:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.596 ************************************ 00:08:33.596 END TEST nvmf_target_discovery 00:08:33.596 ************************************ 00:08:33.596 03:07:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.596 03:07:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.596 03:07:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.596 03:07:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.596 ************************************ 00:08:33.596 START TEST nvmf_referrals 00:08:33.596 ************************************ 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.596 * Looking for test storage... 00:08:33.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:33.596 03:07:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.597 03:07:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.501 03:08:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:35.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:35.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.501 03:08:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:35.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:35.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.501 03:08:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:35.501 00:08:35.501 --- 10.0.0.2 ping statistics --- 00:08:35.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.501 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:35.501 00:08:35.501 --- 10.0.0.1 ping statistics --- 00:08:35.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.501 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=334278 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 334278 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 334278 ']' 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:35.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:35.501 03:08:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.501 [2024-07-23 03:08:02.017359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:35.501 [2024-07-23 03:08:02.017450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.501 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.759 [2024-07-23 03:08:02.084125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.759 [2024-07-23 03:08:02.172285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.759 [2024-07-23 03:08:02.172350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.759 [2024-07-23 03:08:02.172364] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.759 [2024-07-23 03:08:02.172375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.759 [2024-07-23 03:08:02.172384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.759 [2024-07-23 03:08:02.172469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.759 [2024-07-23 03:08:02.172534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.759 [2024-07-23 03:08:02.172599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.759 [2024-07-23 03:08:02.172602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.759 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.759 [2024-07-23 03:08:02.328460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.017 [2024-07-23 03:08:02.340750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
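The entries above bring the TCP transport up and add a discovery listener on 10.0.0.2:8009; the referral steps that follow reduce to a handful of RPCs. A minimal sketch of that target-side sequence, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py under the SPDK checkout used in this run (path below is an assumption):

# Sketch only: rpc_cmd is assumed to wrap scripts/rpc.py against the nvmf_tgt
# started inside the cvl_0_0_ns_spdk namespace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                            # TCP transport, as in the log
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service listener
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430            # 4430 = NVMF_PORT_REFERRAL
done
$rpc nvmf_discovery_get_referrals | jq length                           # the test expects 3 here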
00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.017 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.018 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:36.275 03:08:02 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.275 03:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.532 03:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.532 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:36.790 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.791 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.791 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.048 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.049 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.307 rmmod nvme_tcp 00:08:37.307 rmmod nvme_fabrics 00:08:37.307 rmmod nvme_keyring 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 334278 ']' 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 334278 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 334278 ']' 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 334278 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 334278 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 334278' 00:08:37.307 killing process with pid 334278 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 334278 00:08:37.307 03:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 334278 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.567 03:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.102 03:08:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.102 00:08:40.102 real 0m6.380s 00:08:40.102 user 0m8.832s 00:08:40.102 sys 0m2.107s 00:08:40.102 03:08:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 
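The verification pattern repeated above (get_referral_ips rpc versus get_referral_ips nvme) compares the referral list reported over RPC with what a host actually sees via nvme discover. A condensed sketch, with the rpc.py path and variable names as assumptions; the hostnqn/hostid values are the ones generated for this run:

# Sketch only: compare referrals as reported by the target RPC and as seen on the wire.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55"
rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
wire_ips=$(nvme discover $host -t tcp -a 10.0.0.2 -s 8009 -o json \
           | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
           | sort | xargs)
[[ "$rpc_ips" == "$wire_ips" ]] && echo "referrals match: $rpc_ips"
$rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430          # drop one entry and re-check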
00:08:40.102 03:08:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.102 ************************************ 00:08:40.102 END TEST nvmf_referrals 00:08:40.102 ************************************ 00:08:40.102 03:08:06 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.102 03:08:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:40.102 03:08:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.102 03:08:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.102 ************************************ 00:08:40.102 START TEST nvmf_connect_disconnect 00:08:40.102 ************************************ 00:08:40.102 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.102 * Looking for test storage... 00:08:40.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.102 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.102 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:40.102 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.103 
03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.103 03:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.005 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.006 
03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.006 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.006 03:08:08 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:42.006 00:08:42.006 --- 10.0.0.2 ping statistics --- 00:08:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.006 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:08:42.006 00:08:42.006 --- 10.0.0.1 ping statistics --- 00:08:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.006 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=336566 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 336566 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 336566 ']' 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.006 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.006 [2024-07-23 03:08:08.517698] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:42.006 [2024-07-23 03:08:08.517785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.006 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.265 [2024-07-23 03:08:08.587901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.265 [2024-07-23 03:08:08.677810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.266 [2024-07-23 03:08:08.677883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.266 [2024-07-23 03:08:08.677897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.266 [2024-07-23 03:08:08.677908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.266 [2024-07-23 03:08:08.677917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.266 [2024-07-23 03:08:08.678028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.266 [2024-07-23 03:08:08.678052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.266 [2024-07-23 03:08:08.678113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.266 [2024-07-23 03:08:08.678115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.266 [2024-07-23 03:08:08.827420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.266 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.524 03:08:08 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.524 [2024-07-23 03:08:08.884832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:42.524 03:08:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.771 
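Condensed from the xtrace above, the connect_disconnect test exports one malloc-backed namespace over TCP and then cycles the initiator 100 times (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A sketch of the flow; the connect flags beyond those visible in the log are illustrative, and the wait for the namespace block device between connect and disconnect is omitted:

# target side (rpc_cmd is the harness wrapper around SPDK's rpc.py)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
bdev=$(rpc_cmd bdev_malloc_create 64 512)                      # returns "Malloc0"
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: each disconnect prints the "NQN:... disconnected 1 controller(s)" lines below
for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done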
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:31.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.598 rmmod nvme_tcp 00:12:33.598 rmmod nvme_fabrics 00:12:33.598 rmmod nvme_keyring 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 336566 ']' 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 336566 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 336566 
']' 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 336566 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 336566 00:12:33.598 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:33.599 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:33.599 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 336566' 00:12:33.599 killing process with pid 336566 00:12:33.599 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 336566 00:12:33.599 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 336566 00:12:33.857 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.857 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.858 03:12:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.389 03:12:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.389 00:12:36.389 real 3m56.252s 00:12:36.389 user 14m59.707s 00:12:36.389 sys 0m34.479s 00:12:36.389 03:12:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.389 03:12:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.389 ************************************ 00:12:36.389 END TEST nvmf_connect_disconnect 00:12:36.389 ************************************ 00:12:36.389 03:12:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:36.389 03:12:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.389 03:12:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.389 03:12:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.389 ************************************ 00:12:36.389 START TEST nvmf_multitarget 00:12:36.389 ************************************ 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:36.389 * Looking for test storage... 
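Each suite tears down the same way (seen just above for pid 336566, and again later for the multitarget and rpc targets): unload the initiator modules, stop the target and wait for it, then drop the namespace and flush the initiator port. A sketch; the body of _remove_spdk_ns is not shown in the log, so the netns deletion here is a stand-in:

modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid"
fi
ip netns delete cvl_0_0_ns_spdk   # illustrative stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1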
00:12:36.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.389 03:12:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.390 03:12:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.293 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:38.294 00:12:38.294 --- 10.0.0.2 ping statistics --- 00:12:38.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.294 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:12:38.294 00:12:38.294 --- 10.0.0.1 ping statistics --- 00:12:38.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.294 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=367766 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 367766 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 367766 ']' 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.294 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.294 [2024-07-23 03:12:04.669741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
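For reference, the nvmf_tcp_init sequence traced above amounts to the following: the target-side port cvl_0_0 (10.0.0.2) is moved into its own namespace, the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, and both directions are verified with a single ping. The commands are copied from the xtrace; only the comments are added:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns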
00:12:38.294 [2024-07-23 03:12:04.669838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.294 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.294 [2024-07-23 03:12:04.735519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.294 [2024-07-23 03:12:04.824463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.294 [2024-07-23 03:12:04.824526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.294 [2024-07-23 03:12:04.824554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.294 [2024-07-23 03:12:04.824565] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.294 [2024-07-23 03:12:04.824576] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.294 [2024-07-23 03:12:04.824656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.294 [2024-07-23 03:12:04.824724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.294 [2024-07-23 03:12:04.824788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.294 [2024-07-23 03:12:04.824791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.552 03:12:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:38.552 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:38.552 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:38.809 "nvmf_tgt_1" 00:12:38.809 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:38.809 "nvmf_tgt_2" 00:12:38.809 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.809 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:39.066 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:39.066 
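The multitarget checks above and below reduce to creating two extra targets, counting them with jq, then deleting them again. A condensed sketch; the script itself uses string comparisons ('[' 3 '!=' 3 ']') rather than -eq, which is the only liberty taken here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default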
03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:39.066 true 00:12:39.066 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:39.066 true 00:12:39.066 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.066 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.324 rmmod nvme_tcp 00:12:39.324 rmmod nvme_fabrics 00:12:39.324 rmmod nvme_keyring 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 367766 ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 367766 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 367766 ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 367766 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 367766 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 367766' 00:12:39.324 killing process with pid 367766 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 367766 00:12:39.324 03:12:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 367766 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.581 03:12:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.115 03:12:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.115 00:12:42.115 real 0m5.644s 00:12:42.115 user 0m6.287s 00:12:42.115 sys 0m1.879s 00:12:42.115 03:12:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.115 03:12:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.115 ************************************ 00:12:42.115 END TEST nvmf_multitarget 00:12:42.115 ************************************ 00:12:42.115 03:12:08 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.115 03:12:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:42.115 03:12:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.115 03:12:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.115 ************************************ 00:12:42.116 START TEST nvmf_rpc 00:12:42.116 ************************************ 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.116 * Looking for test storage... 00:12:42.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.116 03:12:08 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.116 
03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.116 03:12:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:44.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:44.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:44.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.020 
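In plain shell, the device walk traced above reduces to mapping each supported PCI function to the net device sysfs exposes under it; device addresses are taken from the log, and the full helper additionally filters on PCI IDs and link state:

# how gather_supported_nvmf_pci_devs finds the two e810 ports' net devices
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net devices under $pci: ${dev##*/}"   # -> cvl_0_0 / cvl_0_1
    done
done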
03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:44.020 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.020 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:44.021 00:12:44.021 --- 10.0.0.2 ping statistics --- 00:12:44.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.021 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:44.021 00:12:44.021 --- 10.0.0.1 ping statistics --- 00:12:44.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.021 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=370367 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 370367 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 370367 ']' 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:44.021 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-07-23 03:12:10.375156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
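nvmf_tcp_init above splits the two ice ports between a dedicated network namespace (target side, 10.0.0.2) and the host namespace (initiator side, 10.0.0.1), opens TCP port 4420, verifies reachability with ping in both directions, and only then launches nvmf_tgt inside the namespace via ip netns exec. A condensed sketch of the same setup, with if0/if1 and spdk_tgt_ns as placeholders for cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk in this run:

    ip netns add spdk_tgt_ns
    ip link set if0 netns spdk_tgt_ns                          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev if1                            # initiator port stays in the host netns
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev if0
    ip link set if1 up
    ip netns exec spdk_tgt_ns ip link set if0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i if1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                                         # host -> namespaced target
    ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt -m 0xF &    # path relative to an SPDK build tree

Keeping the target behind its own namespace lets the same physical host act as both NVMe-oF target and initiator without the kernel short-circuiting the TCP traffic over loopback.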
00:12:44.021 [2024-07-23 03:12:10.375236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.021 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.021 [2024-07-23 03:12:10.445954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.021 [2024-07-23 03:12:10.540387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.021 [2024-07-23 03:12:10.540449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.021 [2024-07-23 03:12:10.540465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.021 [2024-07-23 03:12:10.540479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.021 [2024-07-23 03:12:10.540491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.021 [2024-07-23 03:12:10.540569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.021 [2024-07-23 03:12:10.540637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.021 [2024-07-23 03:12:10.540671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.021 [2024-07-23 03:12:10.540673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:44.280 "tick_rate": 2700000000, 00:12:44.280 "poll_groups": [ 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_000", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [] 00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_001", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [] 00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_002", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [] 
00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_003", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [] 00:12:44.280 } 00:12:44.280 ] 00:12:44.280 }' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.280 [2024-07-23 03:12:10.774729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:44.280 "tick_rate": 2700000000, 00:12:44.280 "poll_groups": [ 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_000", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [ 00:12:44.280 { 00:12:44.280 "trtype": "TCP" 00:12:44.280 } 00:12:44.280 ] 00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_001", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [ 00:12:44.280 { 00:12:44.280 "trtype": "TCP" 00:12:44.280 } 00:12:44.280 ] 00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_002", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [ 00:12:44.280 { 00:12:44.280 "trtype": "TCP" 00:12:44.280 } 00:12:44.280 ] 00:12:44.280 }, 00:12:44.280 { 00:12:44.280 "name": "nvmf_tgt_poll_group_003", 00:12:44.280 "admin_qpairs": 0, 00:12:44.280 "io_qpairs": 0, 00:12:44.280 "current_admin_qpairs": 0, 00:12:44.280 "current_io_qpairs": 0, 00:12:44.280 "pending_bdev_io": 0, 00:12:44.280 "completed_nvme_io": 0, 00:12:44.280 "transports": [ 00:12:44.280 { 00:12:44.280 "trtype": "TCP" 00:12:44.280 } 00:12:44.280 ] 00:12:44.280 } 00:12:44.280 ] 
00:12:44.280 }' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.280 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 Malloc1 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 [2024-07-23 03:12:10.913760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.542 [2024-07-23 03:12:10.936334] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:44.542 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.542 could not add new controller: failed to write to nvme-fabrics device 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.542 03:12:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.108 03:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.108 03:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:45.108 03:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.108 03:12:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:45.108 03:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.663 [2024-07-23 03:12:13.795697] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:47.663 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.663 could not add new controller: failed to write to nvme-fabrics device 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.663 03:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.230 03:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.230 03:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:48.230 03:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.230 03:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:48.230 03:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.129 [2024-07-23 03:12:16.619764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.129 03:12:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.696 03:12:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.696 03:12:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:50.696 03:12:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.696 03:12:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:50.696 03:12:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 [2024-07-23 03:12:19.363881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.223 
03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.223 03:12:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.788 03:12:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.788 03:12:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:53.788 03:12:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.788 03:12:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:53.788 03:12:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.686 03:12:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 [2024-07-23 03:12:22.181100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.686 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.617 03:12:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.617 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:56.617 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.617 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:56.617 03:12:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 [2024-07-23 03:12:24.988960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.512 03:12:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.077 03:12:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.077 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:59.077 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
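waitforserial, invoked after each nvme connect above, simply polls lsblk until a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME) appears; waitforserial_disconnect does the inverse after nvme disconnect. A small sketch of the same polling loop, assuming the serial string used by this test:

    serial=SPDKISFASTANDAWESOME
    for ((i = 0; i < 15; i++)); do
        # count block devices exposing the subsystem serial; >=1 means the
        # namespace from nqn.2016-06.io.spdk:cnode1 has been enumerated by the host
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            echo "namespace visible after ~${i}s"
            break
        fi
        sleep 1
    done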
00:12:59.077 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:59.077 03:12:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 [2024-07-23 03:12:27.767655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.605 03:12:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.171 03:12:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.171 03:12:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:02.171 03:12:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.171 03:12:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:02.171 03:12:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
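Each pass of the loop above exercises the full subsystem lifecycle over the RPC interface: create the subsystem, add a TCP listener on 10.0.0.2:4420, attach Malloc1 as namespace 5, allow any host, connect and disconnect with nvme-cli, then remove the namespace and delete the subsystem. The same sequence issued directly through SPDK's scripts/rpc.py against a running target looks roughly like this (rpc path, address and serial follow this log; adjust to your setup):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # one-time transport setup (done earlier in this log)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1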
00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.070 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 [2024-07-23 03:12:30.589551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.071 [2024-07-23 03:12:30.637642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.071 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 [2024-07-23 03:12:30.685813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.329 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 [2024-07-23 03:12:30.733982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 [2024-07-23 03:12:30.782121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:04.330 "tick_rate": 2700000000, 00:13:04.330 "poll_groups": [ 00:13:04.330 { 00:13:04.330 "name": "nvmf_tgt_poll_group_000", 00:13:04.330 "admin_qpairs": 2, 00:13:04.330 
"io_qpairs": 84, 00:13:04.330 "current_admin_qpairs": 0, 00:13:04.330 "current_io_qpairs": 0, 00:13:04.330 "pending_bdev_io": 0, 00:13:04.330 "completed_nvme_io": 139, 00:13:04.330 "transports": [ 00:13:04.330 { 00:13:04.330 "trtype": "TCP" 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "name": "nvmf_tgt_poll_group_001", 00:13:04.330 "admin_qpairs": 2, 00:13:04.330 "io_qpairs": 84, 00:13:04.330 "current_admin_qpairs": 0, 00:13:04.330 "current_io_qpairs": 0, 00:13:04.330 "pending_bdev_io": 0, 00:13:04.330 "completed_nvme_io": 232, 00:13:04.330 "transports": [ 00:13:04.330 { 00:13:04.330 "trtype": "TCP" 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "name": "nvmf_tgt_poll_group_002", 00:13:04.330 "admin_qpairs": 1, 00:13:04.330 "io_qpairs": 84, 00:13:04.330 "current_admin_qpairs": 0, 00:13:04.330 "current_io_qpairs": 0, 00:13:04.330 "pending_bdev_io": 0, 00:13:04.330 "completed_nvme_io": 154, 00:13:04.330 "transports": [ 00:13:04.330 { 00:13:04.330 "trtype": "TCP" 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "name": "nvmf_tgt_poll_group_003", 00:13:04.330 "admin_qpairs": 2, 00:13:04.330 "io_qpairs": 84, 00:13:04.330 "current_admin_qpairs": 0, 00:13:04.330 "current_io_qpairs": 0, 00:13:04.330 "pending_bdev_io": 0, 00:13:04.330 "completed_nvme_io": 161, 00:13:04.330 "transports": [ 00:13:04.330 { 00:13:04.330 "trtype": "TCP" 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.330 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.589 rmmod nvme_tcp 00:13:04.589 rmmod nvme_fabrics 00:13:04.589 rmmod nvme_keyring 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:04.589 03:12:30 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 370367 ']' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 370367 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 370367 ']' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 370367 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 370367 00:13:04.589 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.590 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.590 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 370367' 00:13:04.590 killing process with pid 370367 00:13:04.590 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 370367 00:13:04.590 03:12:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 370367 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.847 03:12:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.752 03:12:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.752 00:13:06.752 real 0m25.151s 00:13:06.752 user 1m21.988s 00:13:06.752 sys 0m3.960s 00:13:06.752 03:12:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.752 03:12:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.752 ************************************ 00:13:06.752 END TEST nvmf_rpc 00:13:06.752 ************************************ 00:13:06.752 03:12:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.752 03:12:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:06.752 03:12:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.752 03:12:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.014 ************************************ 00:13:07.014 START TEST nvmf_invalid 00:13:07.014 ************************************ 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.014 * Looking for test storage... 
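Annotation on the nvmf_rpc run that finishes above: the jsum helper traced there sums a numeric jq filter over the nvmf_get_stats output with awk before the (( sum > 0 )) checks. A minimal standalone sketch of that pattern, with an illustrative stats blob standing in for a live rpc_cmd call:

#!/usr/bin/env bash
# Sketch of the jsum pattern from target/rpc.sh: sum a numeric jq filter
# across the poll groups reported by nvmf_get_stats.
jsum() {
    local filter=$1
    jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
}

# Illustrative JSON shaped like the nvmf_get_stats output in the trace.
stats='{"poll_groups":[
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":1,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84}]}'

echo "admin qpairs: $(jsum '.poll_groups[].admin_qpairs')"   # 7, matching the (( 7 > 0 )) check
echo "io qpairs:    $(jsum '.poll_groups[].io_qpairs')"      # 336, matching the (( 336 > 0 )) check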
00:13:07.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.014 03:12:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.956 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.957 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:13:09.216 00:13:09.216 --- 10.0.0.2 ping statistics --- 00:13:09.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.216 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:13:09.216 00:13:09.216 --- 10.0.0.1 ping statistics --- 00:13:09.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.216 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.216 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=374866 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 374866 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 374866 ']' 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.217 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.217 [2024-07-23 03:12:35.700197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
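The listener address 10.0.0.2 used throughout these tests comes from the nvmf_tcp_init steps traced a few lines up. Condensed into a standalone sketch: interface names, addresses, the iptables rule and the two pings are copied from the trace; set -e and the note about root privileges are the only additions.

#!/usr/bin/env bash
# Condensed version of the nvmf_tcp_init steps traced above (run as root on a
# disposable test box): one port (cvl_0_0) becomes the target side inside its
# own namespace, the other (cvl_0_1) stays in the root namespace as initiator.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port used by the tests.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions, as common.sh does before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1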
00:13:09.217 [2024-07-23 03:12:35.700280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.217 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.217 [2024-07-23 03:12:35.767975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.475 [2024-07-23 03:12:35.862501] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.475 [2024-07-23 03:12:35.862564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.475 [2024-07-23 03:12:35.862587] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.475 [2024-07-23 03:12:35.862602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.475 [2024-07-23 03:12:35.862621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.475 [2024-07-23 03:12:35.862690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.475 [2024-07-23 03:12:35.862746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.475 [2024-07-23 03:12:35.862798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.475 [2024-07-23 03:12:35.862801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.475 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:09.475 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:09.475 03:12:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.475 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.475 03:12:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.475 03:12:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.475 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.475 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1051 00:13:09.733 [2024-07-23 03:12:36.231129] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:09.733 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:09.733 { 00:13:09.733 "nqn": "nqn.2016-06.io.spdk:cnode1051", 00:13:09.733 "tgt_name": "foobar", 00:13:09.733 "method": "nvmf_create_subsystem", 00:13:09.733 "req_id": 1 00:13:09.733 } 00:13:09.733 Got JSON-RPC error response 00:13:09.733 response: 00:13:09.733 { 00:13:09.733 "code": -32603, 00:13:09.733 "message": "Unable to find target foobar" 00:13:09.733 }' 00:13:09.733 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:09.733 { 00:13:09.733 "nqn": "nqn.2016-06.io.spdk:cnode1051", 00:13:09.733 "tgt_name": "foobar", 00:13:09.733 "method": "nvmf_create_subsystem", 00:13:09.733 "req_id": 1 00:13:09.733 } 00:13:09.733 Got JSON-RPC error response 00:13:09.733 response: 00:13:09.733 { 00:13:09.733 "code": -32603, 00:13:09.733 "message": "Unable to find target foobar" 00:13:09.733 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:09.733 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:09.733 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29729 00:13:09.991 [2024-07-23 03:12:36.524094] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29729: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:09.991 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:09.991 { 00:13:09.991 "nqn": "nqn.2016-06.io.spdk:cnode29729", 00:13:09.991 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.991 "method": "nvmf_create_subsystem", 00:13:09.991 "req_id": 1 00:13:09.991 } 00:13:09.991 Got JSON-RPC error response 00:13:09.991 response: 00:13:09.991 { 00:13:09.991 "code": -32602, 00:13:09.991 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.991 }' 00:13:09.991 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:09.991 { 00:13:09.991 "nqn": "nqn.2016-06.io.spdk:cnode29729", 00:13:09.991 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.991 "method": "nvmf_create_subsystem", 00:13:09.991 "req_id": 1 00:13:09.991 } 00:13:09.991 Got JSON-RPC error response 00:13:09.991 response: 00:13:09.991 { 00:13:09.991 "code": -32602, 00:13:09.991 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.991 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.991 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:09.991 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15775 00:13:10.249 [2024-07-23 03:12:36.768884] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15775: invalid model number 'SPDK_Controller' 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:10.249 { 00:13:10.249 "nqn": "nqn.2016-06.io.spdk:cnode15775", 00:13:10.249 "model_number": "SPDK_Controller\u001f", 00:13:10.249 "method": "nvmf_create_subsystem", 00:13:10.249 "req_id": 1 00:13:10.249 } 00:13:10.249 Got JSON-RPC error response 00:13:10.249 response: 00:13:10.249 { 00:13:10.249 "code": -32602, 00:13:10.249 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.249 }' 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:10.249 { 00:13:10.249 "nqn": "nqn.2016-06.io.spdk:cnode15775", 00:13:10.249 "model_number": "SPDK_Controller\u001f", 00:13:10.249 "method": "nvmf_create_subsystem", 00:13:10.249 "req_id": 1 00:13:10.249 } 00:13:10.249 Got JSON-RPC error response 00:13:10.249 response: 00:13:10.249 { 00:13:10.249 "code": -32602, 00:13:10.249 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.249 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:10.249 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.250 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
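The long run of printf / echo -e / string+= lines here is gen_random_s assembling a random serial number one character at a time from the chars array. A compact sketch of that pattern follows; the real helper also pins RANDOM=0 for reproducibility and guards against a leading '-', which this simplified version omits.

#!/usr/bin/env bash
# Simplified sketch of the gen_random_s pattern traced in target/invalid.sh:
# build a string of $1 printable characters from ASCII codes 32..127.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                 # same code points as the chars array above
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        string+=$(echo -e "\x$(printf %x "$code")")   # printf %x + echo -e, as in the trace
    done
    printf '%s\n' "$string"
}

gen_random_s 21    # same length as the serial number being generated above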
00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'eUF9Uk>"^0Ag1.T(5&<[h' 00:13:10.508 03:12:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -s 'eUF9Uk>"^0Ag1.T(5&<[h' nqn.2016-06.io.spdk:cnode10263 00:13:10.508 [2024-07-23 03:12:37.077980] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10263: invalid serial number 'eUF9Uk>"^0Ag1.T(5&<[h' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:10.767 { 00:13:10.767 "nqn": "nqn.2016-06.io.spdk:cnode10263", 00:13:10.767 "serial_number": "eUF9Uk>\"^0Ag1.T(5&<[h", 00:13:10.767 "method": "nvmf_create_subsystem", 00:13:10.767 "req_id": 1 00:13:10.767 } 00:13:10.767 Got JSON-RPC error response 00:13:10.767 response: 00:13:10.767 { 00:13:10.767 "code": -32602, 00:13:10.767 "message": "Invalid SN eUF9Uk>\"^0Ag1.T(5&<[h" 00:13:10.767 }' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:10.767 { 00:13:10.767 "nqn": "nqn.2016-06.io.spdk:cnode10263", 00:13:10.767 "serial_number": "eUF9Uk>\"^0Ag1.T(5&<[h", 00:13:10.767 "method": "nvmf_create_subsystem", 00:13:10.767 "req_id": 1 00:13:10.767 } 00:13:10.767 Got JSON-RPC error response 00:13:10.767 response: 00:13:10.767 { 00:13:10.767 "code": -32602, 00:13:10.767 "message": "Invalid SN eUF9Uk>\"^0Ag1.T(5&<[h" 00:13:10.767 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.767 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 32 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x45' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
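The run of entries above and below is target/invalid.sh expanding one random character per loop iteration into the string it then passes to nvmf_create_subsystem as a deliberately invalid model number. A minimal sketch of the same technique, with hypothetical variable names and an assumed printable-ASCII range (the script itself drives the values differently), is:

    length=41                              # the model-number case here ends up with a 41-character string
    string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))       # assumed range: printable ASCII 32-126
        hex=$(printf '%x' "$code")         # e.g. 66 -> 42
        string+=$(echo -e "\x$hex")        # decode \x42 -> 'B' and append it
    done
    echo "$string"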
00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.768 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*6Qq\?^*ynAB|a`F5 = 8F\Kb}/E49ODc2!;Sb=+|' 00:13:10.769 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '*6Qq\?^*ynAB|a`F5 = 8F\Kb}/E49ODc2!;Sb=+|' nqn.2016-06.io.spdk:cnode32530 00:13:11.026 [2024-07-23 03:12:37.447135] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32530: invalid model number '*6Qq\?^*ynAB|a`F5 = 8F\Kb}/E49ODc2!;Sb=+|' 00:13:11.026 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:11.026 { 00:13:11.026 "nqn": 
"nqn.2016-06.io.spdk:cnode32530", 00:13:11.026 "model_number": "*6Qq\\?^*ynAB|a`F5 = 8F\\Kb}/E49ODc2!;Sb=+|", 00:13:11.026 "method": "nvmf_create_subsystem", 00:13:11.026 "req_id": 1 00:13:11.026 } 00:13:11.026 Got JSON-RPC error response 00:13:11.026 response: 00:13:11.026 { 00:13:11.026 "code": -32602, 00:13:11.027 "message": "Invalid MN *6Qq\\?^*ynAB|a`F5 = 8F\\Kb}/E49ODc2!;Sb=+|" 00:13:11.027 }' 00:13:11.027 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:11.027 { 00:13:11.027 "nqn": "nqn.2016-06.io.spdk:cnode32530", 00:13:11.027 "model_number": "*6Qq\\?^*ynAB|a`F5 = 8F\\Kb}/E49ODc2!;Sb=+|", 00:13:11.027 "method": "nvmf_create_subsystem", 00:13:11.027 "req_id": 1 00:13:11.027 } 00:13:11.027 Got JSON-RPC error response 00:13:11.027 response: 00:13:11.027 { 00:13:11.027 "code": -32602, 00:13:11.027 "message": "Invalid MN *6Qq\\?^*ynAB|a`F5 = 8F\\Kb}/E49ODc2!;Sb=+|" 00:13:11.027 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.027 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:11.284 [2024-07-23 03:12:37.708050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.284 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:11.542 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:11.542 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:11.542 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:11.542 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:11.542 03:12:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:11.799 [2024-07-23 03:12:38.209733] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:11.799 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:11.799 { 00:13:11.799 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.799 "listen_address": { 00:13:11.799 "trtype": "tcp", 00:13:11.799 "traddr": "", 00:13:11.799 "trsvcid": "4421" 00:13:11.799 }, 00:13:11.799 "method": "nvmf_subsystem_remove_listener", 00:13:11.799 "req_id": 1 00:13:11.799 } 00:13:11.799 Got JSON-RPC error response 00:13:11.799 response: 00:13:11.799 { 00:13:11.799 "code": -32602, 00:13:11.799 "message": "Invalid parameters" 00:13:11.799 }' 00:13:11.799 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:11.799 { 00:13:11.799 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.799 "listen_address": { 00:13:11.799 "trtype": "tcp", 00:13:11.799 "traddr": "", 00:13:11.799 "trsvcid": "4421" 00:13:11.799 }, 00:13:11.799 "method": "nvmf_subsystem_remove_listener", 00:13:11.799 "req_id": 1 00:13:11.799 } 00:13:11.799 Got JSON-RPC error response 00:13:11.799 response: 00:13:11.799 { 00:13:11.799 "code": -32602, 00:13:11.799 "message": "Invalid parameters" 00:13:11.799 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:11.799 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6795 -i 0 00:13:12.058 [2024-07-23 03:12:38.462466] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6795: invalid cntlid range [0-65519] 00:13:12.058 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:12.058 { 00:13:12.058 "nqn": "nqn.2016-06.io.spdk:cnode6795", 00:13:12.058 "min_cntlid": 0, 00:13:12.058 "method": "nvmf_create_subsystem", 00:13:12.058 "req_id": 1 00:13:12.058 } 00:13:12.058 Got JSON-RPC error response 00:13:12.058 response: 00:13:12.058 { 00:13:12.058 "code": -32602, 00:13:12.058 "message": "Invalid cntlid range [0-65519]" 00:13:12.058 }' 00:13:12.058 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:12.058 { 00:13:12.058 "nqn": "nqn.2016-06.io.spdk:cnode6795", 00:13:12.058 "min_cntlid": 0, 00:13:12.058 "method": "nvmf_create_subsystem", 00:13:12.058 "req_id": 1 00:13:12.058 } 00:13:12.058 Got JSON-RPC error response 00:13:12.058 response: 00:13:12.058 { 00:13:12.058 "code": -32602, 00:13:12.058 "message": "Invalid cntlid range [0-65519]" 00:13:12.058 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.058 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28917 -i 65520 00:13:12.316 [2024-07-23 03:12:38.711393] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28917: invalid cntlid range [65520-65519] 00:13:12.316 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:12.316 { 00:13:12.316 "nqn": "nqn.2016-06.io.spdk:cnode28917", 00:13:12.316 "min_cntlid": 65520, 00:13:12.316 "method": "nvmf_create_subsystem", 00:13:12.316 "req_id": 1 00:13:12.316 } 00:13:12.316 Got JSON-RPC error response 00:13:12.316 response: 00:13:12.316 { 00:13:12.316 "code": -32602, 00:13:12.316 "message": "Invalid cntlid range [65520-65519]" 00:13:12.316 }' 00:13:12.316 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:12.316 { 00:13:12.316 "nqn": "nqn.2016-06.io.spdk:cnode28917", 00:13:12.316 "min_cntlid": 65520, 00:13:12.316 "method": "nvmf_create_subsystem", 00:13:12.316 "req_id": 1 00:13:12.316 } 00:13:12.316 Got JSON-RPC error response 00:13:12.316 response: 00:13:12.316 { 00:13:12.316 "code": -32602, 00:13:12.316 "message": "Invalid cntlid range [65520-65519]" 00:13:12.316 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.316 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20958 -I 0 00:13:12.573 [2024-07-23 03:12:38.960270] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20958: invalid cntlid range [1-0] 00:13:12.573 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:12.573 { 00:13:12.573 "nqn": "nqn.2016-06.io.spdk:cnode20958", 00:13:12.573 "max_cntlid": 0, 00:13:12.573 "method": "nvmf_create_subsystem", 00:13:12.573 "req_id": 1 00:13:12.573 } 00:13:12.573 Got JSON-RPC error response 00:13:12.573 response: 00:13:12.573 { 00:13:12.573 "code": -32602, 00:13:12.573 "message": "Invalid cntlid range [1-0]" 00:13:12.573 }' 00:13:12.573 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:12.573 { 00:13:12.573 "nqn": "nqn.2016-06.io.spdk:cnode20958", 00:13:12.573 "max_cntlid": 0, 00:13:12.573 "method": "nvmf_create_subsystem", 00:13:12.573 "req_id": 1 00:13:12.573 } 00:13:12.573 Got JSON-RPC error response 00:13:12.573 response: 00:13:12.573 { 00:13:12.573 
"code": -32602, 00:13:12.573 "message": "Invalid cntlid range [1-0]" 00:13:12.573 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.573 03:12:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9610 -I 65520 00:13:12.831 [2024-07-23 03:12:39.201035] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9610: invalid cntlid range [1-65520] 00:13:12.831 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:12.831 { 00:13:12.831 "nqn": "nqn.2016-06.io.spdk:cnode9610", 00:13:12.831 "max_cntlid": 65520, 00:13:12.831 "method": "nvmf_create_subsystem", 00:13:12.831 "req_id": 1 00:13:12.831 } 00:13:12.831 Got JSON-RPC error response 00:13:12.831 response: 00:13:12.831 { 00:13:12.831 "code": -32602, 00:13:12.831 "message": "Invalid cntlid range [1-65520]" 00:13:12.831 }' 00:13:12.831 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:12.831 { 00:13:12.831 "nqn": "nqn.2016-06.io.spdk:cnode9610", 00:13:12.831 "max_cntlid": 65520, 00:13:12.831 "method": "nvmf_create_subsystem", 00:13:12.831 "req_id": 1 00:13:12.831 } 00:13:12.831 Got JSON-RPC error response 00:13:12.831 response: 00:13:12.831 { 00:13:12.831 "code": -32602, 00:13:12.831 "message": "Invalid cntlid range [1-65520]" 00:13:12.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.831 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5756 -i 6 -I 5 00:13:13.089 [2024-07-23 03:12:39.445833] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5756: invalid cntlid range [6-5] 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:13.089 { 00:13:13.089 "nqn": "nqn.2016-06.io.spdk:cnode5756", 00:13:13.089 "min_cntlid": 6, 00:13:13.089 "max_cntlid": 5, 00:13:13.089 "method": "nvmf_create_subsystem", 00:13:13.089 "req_id": 1 00:13:13.089 } 00:13:13.089 Got JSON-RPC error response 00:13:13.089 response: 00:13:13.089 { 00:13:13.089 "code": -32602, 00:13:13.089 "message": "Invalid cntlid range [6-5]" 00:13:13.089 }' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:13.089 { 00:13:13.089 "nqn": "nqn.2016-06.io.spdk:cnode5756", 00:13:13.089 "min_cntlid": 6, 00:13:13.089 "max_cntlid": 5, 00:13:13.089 "method": "nvmf_create_subsystem", 00:13:13.089 "req_id": 1 00:13:13.089 } 00:13:13.089 Got JSON-RPC error response 00:13:13.089 response: 00:13:13.089 { 00:13:13.089 "code": -32602, 00:13:13.089 "message": "Invalid cntlid range [6-5]" 00:13:13.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:13.089 { 00:13:13.089 "name": "foobar", 00:13:13.089 "method": "nvmf_delete_target", 00:13:13.089 "req_id": 1 00:13:13.089 } 00:13:13.089 Got JSON-RPC error response 00:13:13.089 response: 00:13:13.089 { 00:13:13.089 "code": -32602, 00:13:13.089 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:13.089 }' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:13.089 { 00:13:13.089 "name": "foobar", 00:13:13.089 "method": "nvmf_delete_target", 00:13:13.089 "req_id": 1 00:13:13.089 } 00:13:13.089 Got JSON-RPC error response 00:13:13.089 response: 00:13:13.089 { 00:13:13.089 "code": -32602, 00:13:13.089 "message": "The specified target doesn't exist, cannot delete it." 00:13:13.089 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.089 rmmod nvme_tcp 00:13:13.089 rmmod nvme_fabrics 00:13:13.089 rmmod nvme_keyring 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 374866 ']' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 374866 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 374866 ']' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 374866 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374866 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 374866' 00:13:13.089 killing process with pid 374866 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 374866 00:13:13.089 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 374866 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.348 03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.348 
03:12:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.886 03:12:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.886 00:13:15.886 real 0m8.582s 00:13:15.886 user 0m19.644s 00:13:15.886 sys 0m2.433s 00:13:15.886 03:12:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.886 03:12:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.886 ************************************ 00:13:15.886 END TEST nvmf_invalid 00:13:15.886 ************************************ 00:13:15.886 03:12:41 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.886 03:12:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:15.886 03:12:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.886 03:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.886 ************************************ 00:13:15.886 START TEST nvmf_abort 00:13:15.886 ************************************ 00:13:15.886 03:12:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.886 * Looking for test storage... 00:13:15.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.886 03:12:42 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
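This stretch of the trace is target/abort.sh working through the usual SPDK nvmf test scaffold. Condensed into plain shell, with every command and option value taken from the entries below and only the $rootdir shorthand assumed, the flow is roughly:

    source "$rootdir/test/nvmf/common.sh"
    nvmftestinit                     # builds the cvl_0_0_ns_spdk netns and the 10.0.0.1/10.0.0.2 link
    nvmfappstart -m 0xE              # nvmf_tgt inside the netns, cores 1-3
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # ... run build/examples/abort against 10.0.0.2:4420, then tear down:
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    nvmftestfini

The Delay0 bdev layered on Malloc0 presumably exists to keep I/O in flight long enough for the abort requests issued later to find something to cancel.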
00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.886 03:12:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.803 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.804 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:13:17.804 00:13:17.804 --- 10.0.0.2 ping statistics --- 00:13:17.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.804 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:17.804 00:13:17.804 --- 10.0.0.1 ping statistics --- 00:13:17.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.804 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=377384 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 377384 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 377384 ']' 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.804 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:17.804 [2024-07-23 03:12:44.272636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:17.804 [2024-07-23 03:12:44.272726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.804 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.804 [2024-07-23 03:12:44.343336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.063 [2024-07-23 03:12:44.439720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.063 [2024-07-23 03:12:44.439787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.063 [2024-07-23 03:12:44.439804] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.063 [2024-07-23 03:12:44.439818] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.063 [2024-07-23 03:12:44.439830] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.063 [2024-07-23 03:12:44.439935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.063 [2024-07-23 03:12:44.439990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.063 [2024-07-23 03:12:44.439993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.063 [2024-07-23 03:12:44.590223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.063 Malloc0 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.063 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.321 Delay0 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:18.321 03:12:44 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.321 [2024-07-23 03:12:44.661923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.321 03:12:44 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:18.321 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.322 [2024-07-23 03:12:44.769172] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:20.851 Initializing NVMe Controllers 00:13:20.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:20.852 controller IO queue size 128 less than required 00:13:20.852 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:20.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:20.852 Initialization complete. Launching workers. 
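The workload whose statistics follow is the build/examples/abort invocation traced just above. Reproduced by hand from the spdk checkout, with the option meanings inferred rather than quoted from the example's help text, it would look roughly like:

    # -q 128: queue depth, deliberately larger than the controller accepts (hence the message above)
    # -c 0x1: run the initiator on core 0;  -t 1: run for one second;  -l warning: log level
    ./build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'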
00:13:20.852 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33090 00:13:20.852 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33155, failed to submit 62 00:13:20.852 success 33094, unsuccess 61, failed 0 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.852 rmmod nvme_tcp 00:13:20.852 rmmod nvme_fabrics 00:13:20.852 rmmod nvme_keyring 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 377384 ']' 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 377384 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 377384 ']' 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 377384 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 377384 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 377384' 00:13:20.852 killing process with pid 377384 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 377384 00:13:20.852 03:12:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 377384 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.852 03:12:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.755 03:12:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.755 00:13:22.755 real 0m7.262s 00:13:22.755 user 0m10.357s 00:13:22.755 sys 0m2.616s 00:13:22.755 03:12:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.755 03:12:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.755 ************************************ 00:13:22.755 END TEST nvmf_abort 00:13:22.755 ************************************ 00:13:22.755 03:12:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:22.755 03:12:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:22.756 03:12:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.756 03:12:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.756 ************************************ 00:13:22.756 START TEST nvmf_ns_hotplug_stress 00:13:22.756 ************************************ 00:13:22.756 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:23.015 * Looking for test storage... 00:13:23.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.015 03:12:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.015 03:12:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.015 03:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.919 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:24.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:24.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.920 03:12:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:24.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:24.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
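Both ice ports on 0000:0a:00 were found (cvl_0_0 as NVMF_TARGET_INTERFACE, cvl_0_1 as NVMF_INITIATOR_INTERFACE), so nvmf_tcp_init now splits them into a target side and an initiator side. A condensed sketch of the commands traced in the next few lines, using the interface names and 10.0.0.0/24 addresses this log reports; it is a reading aid, not a replacement for nvmf/common.sh:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                 # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                                       # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # firewall rule from nvmf/common.sh: accept TCP port 4420 on the initiator-side interface
ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability check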
00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:24.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:13:24.920 00:13:24.920 --- 10.0.0.2 ping statistics --- 00:13:24.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.920 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:24.920 00:13:24.920 --- 10.0.0.1 ping statistics --- 00:13:24.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.920 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:24.920 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=379716 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 379716 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 379716 ']' 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:25.216 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.216 [2024-07-23 03:12:51.563548] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:25.216 [2024-07-23 03:12:51.563640] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.216 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.216 [2024-07-23 03:12:51.633320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.216 [2024-07-23 03:12:51.726389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:25.216 [2024-07-23 03:12:51.726454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.216 [2024-07-23 03:12:51.726470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.216 [2024-07-23 03:12:51.726483] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.216 [2024-07-23 03:12:51.726495] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.216 [2024-07-23 03:12:51.726578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.216 [2024-07-23 03:12:51.726642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.216 [2024-07-23 03:12:51.726648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:25.475 03:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.733 [2024-07-23 03:12:52.130643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.733 03:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:25.991 03:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.248 [2024-07-23 03:12:52.641351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.248 03:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.506 03:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:26.764 Malloc0 00:13:26.764 03:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:27.020 Delay0 00:13:27.020 03:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.277 03:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:27.534 NULL1 00:13:27.534 03:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:27.791 03:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=380021 00:13:27.791 03:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:27.791 03:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:27.791 03:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.048 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.979 Read completed with error (sct=0, sc=11) 00:13:28.979 03:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.494 03:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:29.494 03:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:29.751 true 00:13:29.751 03:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:29.751 03:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.314 03:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.571 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:30.571 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:30.828 true 00:13:30.828 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:30.828 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.084 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.648 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:31.648 03:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:31.648 true 00:13:31.648 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:31.648 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.905 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.162 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:32.162 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:32.419 true 00:13:32.419 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:32.419 03:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.790 03:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.790 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:33.790 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:34.047 true 00:13:34.047 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:34.047 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.304 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.562 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:34.562 03:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:34.819 true 00:13:34.819 03:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:34.819 03:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.751 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.009 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:36.009 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:36.266 true 00:13:36.266 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:36.266 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.524 03:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.782 03:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:36.782 03:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:36.782 true 00:13:37.039 03:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:37.039 03:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.969 03:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.226 03:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:38.226 03:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:38.483 true 00:13:38.483 03:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:38.483 03:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.741 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.998 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:38.998 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:38.998 true 00:13:38.998 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:38.998 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
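By this point the target (running inside cvl_0_0_ns_spdk) has a TCP transport (-t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a Delay0 delay bdev layered on Malloc0 (all four -r/-t/-w/-n delay parameters set to 1000000), and a 1000 MB NULL1 null bdev, while spdk_nvme_perf (pid 380021) drives 30 seconds of queue-depth-128, 512-byte randread against it. Every stress iteration in this trace repeats the same steps; a condensed sketch of one pass, using the rpc_py helper path the script defines, with the while-loop structure inferred from the repeating xtrace line numbers (44-50) rather than copied verbatim from ns_hotplug_stress.sh:

while kill -0 "$PERF_PID"; do                                          # keep cycling while spdk_nvme_perf (pid 380021) is alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1 under live I/O
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add the delay bdev back as a namespace
    null_size=$((null_size + 1))                                       # 1001, 1002, ... climbing over the 30 s run
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow the null bdev backing the other namespace
done

The loop ends when kill -0 reports the perf process is gone, which is why the resize count keeps climbing in the iterations below until the summary statistics are printed.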
00:13:39.255 03:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.512 03:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:39.512 03:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:39.770 true 00:13:39.770 03:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:39.770 03:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.175 03:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.175 03:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:41.175 03:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:41.433 true 00:13:41.433 03:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:41.433 03:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.366 03:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.366 03:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:42.366 03:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:42.624 true 00:13:42.624 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:42.624 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.881 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.139 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:43.139 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:43.397 true 00:13:43.397 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:43.397 03:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.330 03:13:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.587 03:13:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:44.587 03:13:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:44.845 true 00:13:44.845 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:44.845 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.103 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.361 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:45.361 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:45.361 true 00:13:45.361 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:45.361 03:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.733 03:13:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.733 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:46.733 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:46.990 true 00:13:46.990 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:46.990 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.247 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.504 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:47.504 03:13:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:47.762 true 00:13:47.762 03:13:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:47.762 03:13:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.694 03:13:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.694 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:48.694 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:48.952 true 00:13:48.952 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:48.952 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.210 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.467 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:49.467 03:13:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:49.725 true 00:13:49.725 03:13:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:49.725 03:13:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.657 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.915 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:50.915 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:51.172 true 00:13:51.172 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:51.172 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.430 03:13:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.687 03:13:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:51.687 03:13:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1022 00:13:51.944 true 00:13:51.944 03:13:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:51.944 03:13:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.877 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.877 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:52.877 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:53.134 true 00:13:53.134 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:53.134 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.392 03:13:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.650 03:13:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:53.650 03:13:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:53.908 true 00:13:53.908 03:13:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:53.908 03:13:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.840 03:13:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.101 03:13:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:55.102 03:13:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:55.369 true 00:13:55.369 03:13:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:55.369 03:13:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.652 03:13:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.910 03:13:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:55.910 03:13:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:56.167 true 00:13:56.167 03:13:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:56.167 03:13:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.098 03:13:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.098 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.355 03:13:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:57.355 03:13:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:57.612 true 00:13:57.612 03:13:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:57.612 03:13:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.869 03:13:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.127 03:13:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:58.127 03:13:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:58.384 true 00:13:58.384 03:13:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:58.384 03:13:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.948 03:13:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.206 Initializing NVMe Controllers 00:13:59.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.206 Controller IO queue size 128, less than required. 00:13:59.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.206 Controller IO queue size 128, less than required. 00:13:59.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:59.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:59.206 Initialization complete. Launching workers. 
00:13:59.206 ======================================================== 00:13:59.206 Latency(us) 00:13:59.206 Device Information : IOPS MiB/s Average min max 00:13:59.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.70 0.49 71978.51 2281.17 1012486.13 00:13:59.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8342.67 4.07 15342.46 5771.87 450231.28 00:13:59.206 ======================================================== 00:13:59.206 Total : 9340.37 4.56 21392.09 2281.17 1012486.13 00:13:59.206 00:13:59.206 03:13:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:59.206 03:13:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:59.463 true 00:13:59.463 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 380021 00:13:59.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (380021) - No such process 00:13:59.463 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 380021 00:13:59.463 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.720 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.978 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:59.978 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:59.978 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:59.978 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.978 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:00.235 null0 00:14:00.235 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.235 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.235 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:00.492 null1 00:14:00.492 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.492 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.492 03:13:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:00.749 null2 00:14:00.749 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.749 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.749 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:01.006 null3 
00:14:01.006 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:01.006 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:01.006 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:01.264 null4 00:14:01.264 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:01.264 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:01.264 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:01.521 null5 00:14:01.521 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:01.521 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:01.521 03:13:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:01.779 null6 00:14:01.779 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:01.779 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:01.779 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:02.038 null7 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.038 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
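Each add_remove invocation above is launched as a background job; the parent records its PID and, once all eight are started (the remaining workers are launched the same way just below), blocks on them collectively with the "wait 384190 384191 ..." seen a little further down. The launch pattern reconstructed from the sh@62-sh@66 markers looks approximately like this, with the nsid-to-bdev mapping (nsid i+1 backed by null<i>) taken from the add_remove arguments in the trace:

    # Reconstructed from the sh@62-sh@66 trace lines; add_remove is the helper
    # sketched above.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # worker i churns nsid i+1 on bdev null<i>
        pids+=($!)                         # remember the background job's PID
    done
    wait "${pids[@]}"                      # block until every hotplug worker has finished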
00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 384190 384191 384193 384195 384197 384199 384201 384203 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.039 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.297 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.298 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.298 03:13:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.556 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.815 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.074 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.332 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.333 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.333 03:13:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.591 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.592 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.592 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.850 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.108 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.108 
03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.366 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.625 03:13:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.625 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.884 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.143 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.401 03:13:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.660 
03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.660 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.918 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.176 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.434 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.435 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.435 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.435 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.435 03:13:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.692 
03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.692 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.952 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.210 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
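Throughout these iterations the set of namespaces attached to nqn.2016-06.io.spdk:cnode1 is in constant flux as the eight workers race. Everything is driven over the RPC socket, so the churn can also be observed from the side with nvmf_get_subsystems; the snippet below is purely illustrative (it is not part of ns_hotplug_stress.sh) and assumes the usual JSON shape of that RPC's output, with a per-subsystem namespaces array carrying nsid fields:

    # Illustrative only: poll the target once per second and print the NSIDs
    # currently attached to cnode1 while the hotplug workers run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while sleep 1; do
        "$rpc" nvmf_get_subsystems \
            | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
                         | [.namespaces[].nsid] | @csv'
    done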
00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.467 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.468 03:13:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.468 rmmod nvme_tcp 00:14:07.468 rmmod nvme_fabrics 00:14:07.468 rmmod nvme_keyring 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 379716 ']' 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 379716 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 379716 ']' 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 379716 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 379716 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 379716' 00:14:07.468 killing process with pid 379716 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 379716 00:14:07.468 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 379716 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.727 03:13:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.267 03:13:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.267 00:14:10.267 real 0m47.006s 00:14:10.267 user 3m26.968s 00:14:10.267 sys 0m18.917s 00:14:10.267 03:13:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.267 03:13:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.267 ************************************ 00:14:10.267 END TEST nvmf_ns_hotplug_stress 00:14:10.267 ************************************ 00:14:10.267 03:13:36 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:10.267 03:13:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.267 03:13:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.267 03:13:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.267 ************************************ 00:14:10.267 START TEST nvmf_connect_stress 00:14:10.267 ************************************ 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:10.267 * Looking for test storage... 
00:14:10.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.267 03:13:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.168 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.168 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.168 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:12.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:12.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:12.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.169 03:13:38 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:12.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:12.169 00:14:12.169 --- 10.0.0.2 ping statistics --- 00:14:12.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.169 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:14:12.169 00:14:12.169 --- 10.0.0.1 ping statistics --- 00:14:12.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.169 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.169 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=386940 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 386940 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 386940 ']' 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:12.170 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.170 [2024-07-23 03:13:38.478208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:12.170 [2024-07-23 03:13:38.478279] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.170 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.170 [2024-07-23 03:13:38.548521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.170 [2024-07-23 03:13:38.647545] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.170 [2024-07-23 03:13:38.647620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.170 [2024-07-23 03:13:38.647639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.170 [2024-07-23 03:13:38.647654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.170 [2024-07-23 03:13:38.647665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.170 [2024-07-23 03:13:38.647754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.170 [2024-07-23 03:13:38.647807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.170 [2024-07-23 03:13:38.647810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 [2024-07-23 03:13:38.796618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 [2024-07-23 03:13:38.829771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 NULL1 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=386974 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:12.428 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:12.429 03:13:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.429 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.429 03:13:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.686 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.686 03:13:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:12.686 03:13:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.686 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.686 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.250 03:13:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:13.250 03:13:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.250 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.250 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.508 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.508 03:13:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:13.508 03:13:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.508 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.508 03:13:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.765 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.765 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:13.765 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.765 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.765 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.023 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.023 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:14.023 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.023 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.023 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.280 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.280 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:14.280 03:13:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.280 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.280 03:13:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.845 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.845 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:14.845 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.845 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.845 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.102 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.102 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:15.102 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.102 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.102 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.360 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:15.360 03:13:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.360 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.360 03:13:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.617 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.617 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:15.617 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:15.617 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.617 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.875 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.875 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:15.875 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.875 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.875 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.439 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.439 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:16.439 03:13:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.439 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.439 03:13:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.696 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.696 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:16.696 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.696 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.696 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.952 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.952 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:16.952 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.952 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.952 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.210 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.210 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:17.210 03:13:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.210 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.210 03:13:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.467 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.467 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:17.467 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.467 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.467 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.031 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.031 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:18.031 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.031 03:13:44 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.031 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.288 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.288 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:18.288 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.288 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.288 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.546 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.546 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:18.546 03:13:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.546 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.546 03:13:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.803 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.803 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:18.803 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.803 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.803 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.367 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.367 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:19.367 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.367 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.367 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.625 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.625 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:19.625 03:13:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.625 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.625 03:13:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.882 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.882 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:19.882 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.882 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.882 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.139 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.139 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:20.139 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.139 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.139 
03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.397 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.397 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:20.397 03:13:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.397 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.397 03:13:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.962 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.962 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:20.962 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.962 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.962 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.220 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.220 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:21.220 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.220 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.220 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.478 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.478 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:21.478 03:13:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.478 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.478 03:13:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.735 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.735 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:21.735 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.735 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.735 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.993 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.993 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:21.993 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.993 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.993 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.558 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.558 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:22.558 03:13:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.558 03:13:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.558 03:13:48 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.558 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386974 00:14:22.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (386974) - No such process 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 386974 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.816 rmmod nvme_tcp 00:14:22.816 rmmod nvme_fabrics 00:14:22.816 rmmod nvme_keyring 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 386940 ']' 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 386940 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 386940 ']' 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 386940 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 386940 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 386940' 00:14:22.816 killing process with pid 386940 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 386940 00:14:22.816 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 386940 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.076 03:13:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.983 03:13:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.983 00:14:24.983 real 0m15.156s 00:14:24.983 user 0m38.093s 00:14:24.983 sys 0m5.944s 00:14:24.983 03:13:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:24.983 03:13:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.983 ************************************ 00:14:24.983 END TEST nvmf_connect_stress 00:14:24.983 ************************************ 00:14:24.983 03:13:51 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:24.983 03:13:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:24.983 03:13:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.983 03:13:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.983 ************************************ 00:14:24.983 START TEST nvmf_fused_ordering 00:14:24.983 ************************************ 00:14:24.983 03:13:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:25.242 * Looking for test storage... 
00:14:25.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.242 03:13:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.174 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:27.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:27.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:27.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.175 03:13:53 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:27.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
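The nvmf_tcp_init trace above reduces to moving one port of the dual-port NIC into a private network namespace and wiring 10.0.0.1/10.0.0.2 across the two ports; the ping replies that continue below simply verify that path. A condensed, stand-alone sketch of the same plumbing, keeping this job's interface names (cvl_0_0, cvl_0_1) and namespace (cvl_0_0_ns_spdk); it must run as root:

  # Sketch of the namespace setup performed by nvmf_tcp_init above.
  TARGET_IF=cvl_0_0        # port handed to the SPDK target
  INITIATOR_IF=cvl_0_1     # port left in the default namespace for the initiator
  NETNS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NETNS"
  ip link set "$TARGET_IF" netns "$NETNS"                          # isolate the target port

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # initiator side
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  ip netns exec "$NETNS" ip link set lo up

  # Accept NVMe/TCP traffic on the default port before checking connectivity.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec "$NETNS" ping -c 1 10.0.0.1                        # target -> initiator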
00:14:27.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:27.175 00:14:27.175 --- 10.0.0.2 ping statistics --- 00:14:27.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.175 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:14:27.175 00:14:27.175 --- 10.0.0.1 ping statistics --- 00:14:27.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.175 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.175 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=390123 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 390123 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 390123 ']' 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.434 03:13:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.434 [2024-07-23 03:13:53.818518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
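With connectivity confirmed, the harness launches the SPDK target inside that namespace (the -m 0x2 core mask accounts for the single reactor on core 1 reported below) and blocks until the RPC socket answers; the "Starting SPDK v24.05.1-pre" banner above is the target's own startup notice. A hedged sketch of that launch step, reusing this workspace's paths and a simple poll in place of the waitforlisten helper used by the harness:

  # Start nvmf_tgt inside the test namespace and wait for its RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NETNS=cvl_0_0_ns_spdk

  ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  tgt_pid=$!                                    # -i: shared-memory id, -e: tracepoint group mask

  # Rough equivalent of waitforlisten: poll until the UNIX-domain RPC socket responds.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done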
00:14:27.434 [2024-07-23 03:13:53.818597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.434 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.434 [2024-07-23 03:13:53.885120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.434 [2024-07-23 03:13:53.975829] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.434 [2024-07-23 03:13:53.975893] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.434 [2024-07-23 03:13:53.975910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.434 [2024-07-23 03:13:53.975932] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.434 [2024-07-23 03:13:53.975944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.434 [2024-07-23 03:13:53.975972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 [2024-07-23 03:13:54.125114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 [2024-07-23 03:13:54.141306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 NULL1 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.693 03:13:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:27.693 [2024-07-23 03:13:54.185998] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:27.693 [2024-07-23 03:13:54.186037] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390260 ] 00:14:27.693 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.625 Attached to nqn.2016-06.io.spdk:cnode1 00:14:28.625 Namespace ID: 1 size: 1GB 00:14:28.625 fused_ordering(0) 00:14:28.625 fused_ordering(1) 00:14:28.625 fused_ordering(2) 00:14:28.625 fused_ordering(3) 00:14:28.625 fused_ordering(4) 00:14:28.625 fused_ordering(5) 00:14:28.625 fused_ordering(6) 00:14:28.625 fused_ordering(7) 00:14:28.625 fused_ordering(8) 00:14:28.625 fused_ordering(9) 00:14:28.625 fused_ordering(10) 00:14:28.625 fused_ordering(11) 00:14:28.625 fused_ordering(12) 00:14:28.625 fused_ordering(13) 00:14:28.625 fused_ordering(14) 00:14:28.625 fused_ordering(15) 00:14:28.625 fused_ordering(16) 00:14:28.625 fused_ordering(17) 00:14:28.625 fused_ordering(18) 00:14:28.625 fused_ordering(19) 00:14:28.625 fused_ordering(20) 00:14:28.625 fused_ordering(21) 00:14:28.625 fused_ordering(22) 00:14:28.625 fused_ordering(23) 00:14:28.625 fused_ordering(24) 00:14:28.625 fused_ordering(25) 00:14:28.625 fused_ordering(26) 00:14:28.625 fused_ordering(27) 00:14:28.625 fused_ordering(28) 00:14:28.625 fused_ordering(29) 00:14:28.625 fused_ordering(30) 00:14:28.625 fused_ordering(31) 00:14:28.625 fused_ordering(32) 00:14:28.625 fused_ordering(33) 00:14:28.625 fused_ordering(34) 00:14:28.625 fused_ordering(35) 00:14:28.625 fused_ordering(36) 00:14:28.625 fused_ordering(37) 00:14:28.625 fused_ordering(38) 00:14:28.625 fused_ordering(39) 00:14:28.625 fused_ordering(40) 00:14:28.625 fused_ordering(41) 00:14:28.625 fused_ordering(42) 00:14:28.625 fused_ordering(43) 00:14:28.625 fused_ordering(44) 00:14:28.625 fused_ordering(45) 
00:14:28.625 fused_ordering(46) 00:14:28.625 fused_ordering(47) 00:14:28.625 fused_ordering(48) 00:14:28.625 fused_ordering(49) 00:14:28.625 fused_ordering(50) 00:14:28.625 fused_ordering(51) 00:14:28.625 fused_ordering(52) 00:14:28.625 fused_ordering(53) 00:14:28.625 fused_ordering(54) 00:14:28.625 fused_ordering(55) 00:14:28.625 fused_ordering(56) 00:14:28.625 fused_ordering(57) 00:14:28.625 fused_ordering(58) 00:14:28.625 fused_ordering(59) 00:14:28.625 fused_ordering(60) 00:14:28.625 fused_ordering(61) 00:14:28.625 fused_ordering(62) 00:14:28.625 fused_ordering(63) 00:14:28.625 fused_ordering(64) 00:14:28.625 fused_ordering(65) 00:14:28.625 fused_ordering(66) 00:14:28.625 fused_ordering(67) 00:14:28.625 fused_ordering(68) 00:14:28.625 fused_ordering(69) 00:14:28.625 fused_ordering(70) 00:14:28.625 fused_ordering(71) 00:14:28.625 fused_ordering(72) 00:14:28.625 fused_ordering(73) 00:14:28.625 fused_ordering(74) 00:14:28.625 fused_ordering(75) 00:14:28.625 fused_ordering(76) 00:14:28.625 fused_ordering(77) 00:14:28.625 fused_ordering(78) 00:14:28.625 fused_ordering(79) 00:14:28.625 fused_ordering(80) 00:14:28.625 fused_ordering(81) 00:14:28.625 fused_ordering(82) 00:14:28.625 fused_ordering(83) 00:14:28.625 fused_ordering(84) 00:14:28.625 fused_ordering(85) 00:14:28.625 fused_ordering(86) 00:14:28.625 fused_ordering(87) 00:14:28.625 fused_ordering(88) 00:14:28.625 fused_ordering(89) 00:14:28.625 fused_ordering(90) 00:14:28.625 fused_ordering(91) 00:14:28.625 fused_ordering(92) 00:14:28.625 fused_ordering(93) 00:14:28.625 fused_ordering(94) 00:14:28.625 fused_ordering(95) 00:14:28.625 fused_ordering(96) 00:14:28.625 fused_ordering(97) 00:14:28.625 fused_ordering(98) 00:14:28.625 fused_ordering(99) 00:14:28.625 fused_ordering(100) 00:14:28.625 fused_ordering(101) 00:14:28.625 fused_ordering(102) 00:14:28.625 fused_ordering(103) 00:14:28.625 fused_ordering(104) 00:14:28.625 fused_ordering(105) 00:14:28.625 fused_ordering(106) 00:14:28.625 fused_ordering(107) 00:14:28.625 fused_ordering(108) 00:14:28.625 fused_ordering(109) 00:14:28.625 fused_ordering(110) 00:14:28.625 fused_ordering(111) 00:14:28.625 fused_ordering(112) 00:14:28.625 fused_ordering(113) 00:14:28.625 fused_ordering(114) 00:14:28.625 fused_ordering(115) 00:14:28.625 fused_ordering(116) 00:14:28.625 fused_ordering(117) 00:14:28.625 fused_ordering(118) 00:14:28.625 fused_ordering(119) 00:14:28.625 fused_ordering(120) 00:14:28.625 fused_ordering(121) 00:14:28.625 fused_ordering(122) 00:14:28.625 fused_ordering(123) 00:14:28.625 fused_ordering(124) 00:14:28.625 fused_ordering(125) 00:14:28.625 fused_ordering(126) 00:14:28.625 fused_ordering(127) 00:14:28.625 fused_ordering(128) 00:14:28.625 fused_ordering(129) 00:14:28.625 fused_ordering(130) 00:14:28.625 fused_ordering(131) 00:14:28.625 fused_ordering(132) 00:14:28.625 fused_ordering(133) 00:14:28.625 fused_ordering(134) 00:14:28.625 fused_ordering(135) 00:14:28.625 fused_ordering(136) 00:14:28.625 fused_ordering(137) 00:14:28.625 fused_ordering(138) 00:14:28.625 fused_ordering(139) 00:14:28.625 fused_ordering(140) 00:14:28.625 fused_ordering(141) 00:14:28.625 fused_ordering(142) 00:14:28.625 fused_ordering(143) 00:14:28.625 fused_ordering(144) 00:14:28.625 fused_ordering(145) 00:14:28.625 fused_ordering(146) 00:14:28.625 fused_ordering(147) 00:14:28.625 fused_ordering(148) 00:14:28.625 fused_ordering(149) 00:14:28.625 fused_ordering(150) 00:14:28.625 fused_ordering(151) 00:14:28.625 fused_ordering(152) 00:14:28.625 fused_ordering(153) 00:14:28.625 fused_ordering(154) 
00:14:28.625 fused_ordering(155) 00:14:28.625 fused_ordering(156) 00:14:28.625 fused_ordering(157) 00:14:28.625 fused_ordering(158) 00:14:28.625 fused_ordering(159) 00:14:28.625 fused_ordering(160) 00:14:28.625 fused_ordering(161) 00:14:28.625 fused_ordering(162) 00:14:28.625 fused_ordering(163) 00:14:28.625 fused_ordering(164) 00:14:28.625 fused_ordering(165) 00:14:28.625 fused_ordering(166) 00:14:28.625 fused_ordering(167) 00:14:28.625 fused_ordering(168) 00:14:28.625 fused_ordering(169) 00:14:28.625 fused_ordering(170) 00:14:28.626 fused_ordering(171) 00:14:28.626 fused_ordering(172) 00:14:28.626 fused_ordering(173) 00:14:28.626 fused_ordering(174) 00:14:28.626 fused_ordering(175) 00:14:28.626 fused_ordering(176) 00:14:28.626 fused_ordering(177) 00:14:28.626 fused_ordering(178) 00:14:28.626 fused_ordering(179) 00:14:28.626 fused_ordering(180) 00:14:28.626 fused_ordering(181) 00:14:28.626 fused_ordering(182) 00:14:28.626 fused_ordering(183) 00:14:28.626 fused_ordering(184) 00:14:28.626 fused_ordering(185) 00:14:28.626 fused_ordering(186) 00:14:28.626 fused_ordering(187) 00:14:28.626 fused_ordering(188) 00:14:28.626 fused_ordering(189) 00:14:28.626 fused_ordering(190) 00:14:28.626 fused_ordering(191) 00:14:28.626 fused_ordering(192) 00:14:28.626 fused_ordering(193) 00:14:28.626 fused_ordering(194) 00:14:28.626 fused_ordering(195) 00:14:28.626 fused_ordering(196) 00:14:28.626 fused_ordering(197) 00:14:28.626 fused_ordering(198) 00:14:28.626 fused_ordering(199) 00:14:28.626 fused_ordering(200) 00:14:28.626 fused_ordering(201) 00:14:28.626 fused_ordering(202) 00:14:28.626 fused_ordering(203) 00:14:28.626 fused_ordering(204) 00:14:28.626 fused_ordering(205) 00:14:28.883 fused_ordering(206) 00:14:28.883 fused_ordering(207) 00:14:28.883 fused_ordering(208) 00:14:28.883 fused_ordering(209) 00:14:28.883 fused_ordering(210) 00:14:28.883 fused_ordering(211) 00:14:28.883 fused_ordering(212) 00:14:28.883 fused_ordering(213) 00:14:28.883 fused_ordering(214) 00:14:28.883 fused_ordering(215) 00:14:28.883 fused_ordering(216) 00:14:28.883 fused_ordering(217) 00:14:28.883 fused_ordering(218) 00:14:28.883 fused_ordering(219) 00:14:28.883 fused_ordering(220) 00:14:28.883 fused_ordering(221) 00:14:28.883 fused_ordering(222) 00:14:28.883 fused_ordering(223) 00:14:28.883 fused_ordering(224) 00:14:28.883 fused_ordering(225) 00:14:28.883 fused_ordering(226) 00:14:28.883 fused_ordering(227) 00:14:28.883 fused_ordering(228) 00:14:28.883 fused_ordering(229) 00:14:28.883 fused_ordering(230) 00:14:28.883 fused_ordering(231) 00:14:28.883 fused_ordering(232) 00:14:28.883 fused_ordering(233) 00:14:28.883 fused_ordering(234) 00:14:28.883 fused_ordering(235) 00:14:28.883 fused_ordering(236) 00:14:28.883 fused_ordering(237) 00:14:28.883 fused_ordering(238) 00:14:28.883 fused_ordering(239) 00:14:28.883 fused_ordering(240) 00:14:28.883 fused_ordering(241) 00:14:28.883 fused_ordering(242) 00:14:28.883 fused_ordering(243) 00:14:28.883 fused_ordering(244) 00:14:28.883 fused_ordering(245) 00:14:28.883 fused_ordering(246) 00:14:28.883 fused_ordering(247) 00:14:28.883 fused_ordering(248) 00:14:28.883 fused_ordering(249) 00:14:28.883 fused_ordering(250) 00:14:28.883 fused_ordering(251) 00:14:28.883 fused_ordering(252) 00:14:28.883 fused_ordering(253) 00:14:28.883 fused_ordering(254) 00:14:28.883 fused_ordering(255) 00:14:28.883 fused_ordering(256) 00:14:28.883 fused_ordering(257) 00:14:28.883 fused_ordering(258) 00:14:28.883 fused_ordering(259) 00:14:28.883 fused_ordering(260) 00:14:28.883 fused_ordering(261) 00:14:28.883 
fused_ordering(262) 00:14:28.883 fused_ordering(263) 00:14:28.883 fused_ordering(264) 00:14:28.883 fused_ordering(265) 00:14:28.883 fused_ordering(266) 00:14:28.883 fused_ordering(267) 00:14:28.883 fused_ordering(268) 00:14:28.883 fused_ordering(269) 00:14:28.883 fused_ordering(270) 00:14:28.883 fused_ordering(271) 00:14:28.883 fused_ordering(272) 00:14:28.883 fused_ordering(273) 00:14:28.883 fused_ordering(274) 00:14:28.883 fused_ordering(275) 00:14:28.883 fused_ordering(276) 00:14:28.883 fused_ordering(277) 00:14:28.883 fused_ordering(278) 00:14:28.883 fused_ordering(279) 00:14:28.883 fused_ordering(280) 00:14:28.883 fused_ordering(281) 00:14:28.883 fused_ordering(282) 00:14:28.883 fused_ordering(283) 00:14:28.883 fused_ordering(284) 00:14:28.883 fused_ordering(285) 00:14:28.883 fused_ordering(286) 00:14:28.883 fused_ordering(287) 00:14:28.883 fused_ordering(288) 00:14:28.883 fused_ordering(289) 00:14:28.883 fused_ordering(290) 00:14:28.883 fused_ordering(291) 00:14:28.883 fused_ordering(292) 00:14:28.883 fused_ordering(293) 00:14:28.883 fused_ordering(294) 00:14:28.883 fused_ordering(295) 00:14:28.883 fused_ordering(296) 00:14:28.883 fused_ordering(297) 00:14:28.883 fused_ordering(298) 00:14:28.883 fused_ordering(299) 00:14:28.883 fused_ordering(300) 00:14:28.883 fused_ordering(301) 00:14:28.883 fused_ordering(302) 00:14:28.883 fused_ordering(303) 00:14:28.883 fused_ordering(304) 00:14:28.883 fused_ordering(305) 00:14:28.883 fused_ordering(306) 00:14:28.883 fused_ordering(307) 00:14:28.883 fused_ordering(308) 00:14:28.883 fused_ordering(309) 00:14:28.883 fused_ordering(310) 00:14:28.883 fused_ordering(311) 00:14:28.883 fused_ordering(312) 00:14:28.883 fused_ordering(313) 00:14:28.883 fused_ordering(314) 00:14:28.883 fused_ordering(315) 00:14:28.883 fused_ordering(316) 00:14:28.883 fused_ordering(317) 00:14:28.883 fused_ordering(318) 00:14:28.883 fused_ordering(319) 00:14:28.883 fused_ordering(320) 00:14:28.883 fused_ordering(321) 00:14:28.883 fused_ordering(322) 00:14:28.883 fused_ordering(323) 00:14:28.883 fused_ordering(324) 00:14:28.883 fused_ordering(325) 00:14:28.883 fused_ordering(326) 00:14:28.883 fused_ordering(327) 00:14:28.883 fused_ordering(328) 00:14:28.883 fused_ordering(329) 00:14:28.883 fused_ordering(330) 00:14:28.883 fused_ordering(331) 00:14:28.883 fused_ordering(332) 00:14:28.883 fused_ordering(333) 00:14:28.883 fused_ordering(334) 00:14:28.883 fused_ordering(335) 00:14:28.883 fused_ordering(336) 00:14:28.883 fused_ordering(337) 00:14:28.883 fused_ordering(338) 00:14:28.883 fused_ordering(339) 00:14:28.884 fused_ordering(340) 00:14:28.884 fused_ordering(341) 00:14:28.884 fused_ordering(342) 00:14:28.884 fused_ordering(343) 00:14:28.884 fused_ordering(344) 00:14:28.884 fused_ordering(345) 00:14:28.884 fused_ordering(346) 00:14:28.884 fused_ordering(347) 00:14:28.884 fused_ordering(348) 00:14:28.884 fused_ordering(349) 00:14:28.884 fused_ordering(350) 00:14:28.884 fused_ordering(351) 00:14:28.884 fused_ordering(352) 00:14:28.884 fused_ordering(353) 00:14:28.884 fused_ordering(354) 00:14:28.884 fused_ordering(355) 00:14:28.884 fused_ordering(356) 00:14:28.884 fused_ordering(357) 00:14:28.884 fused_ordering(358) 00:14:28.884 fused_ordering(359) 00:14:28.884 fused_ordering(360) 00:14:28.884 fused_ordering(361) 00:14:28.884 fused_ordering(362) 00:14:28.884 fused_ordering(363) 00:14:28.884 fused_ordering(364) 00:14:28.884 fused_ordering(365) 00:14:28.884 fused_ordering(366) 00:14:28.884 fused_ordering(367) 00:14:28.884 fused_ordering(368) 00:14:28.884 fused_ordering(369) 
00:14:28.884 fused_ordering(370) 00:14:28.884 fused_ordering(371) 00:14:28.884 fused_ordering(372) 00:14:28.884 fused_ordering(373) 00:14:28.884 fused_ordering(374) 00:14:28.884 fused_ordering(375) 00:14:28.884 fused_ordering(376) 00:14:28.884 fused_ordering(377) 00:14:28.884 fused_ordering(378) 00:14:28.884 fused_ordering(379) 00:14:28.884 fused_ordering(380) 00:14:28.884 fused_ordering(381) 00:14:28.884 fused_ordering(382) 00:14:28.884 fused_ordering(383) 00:14:28.884 fused_ordering(384) 00:14:28.884 fused_ordering(385) 00:14:28.884 fused_ordering(386) 00:14:28.884 fused_ordering(387) 00:14:28.884 fused_ordering(388) 00:14:28.884 fused_ordering(389) 00:14:28.884 fused_ordering(390) 00:14:28.884 fused_ordering(391) 00:14:28.884 fused_ordering(392) 00:14:28.884 fused_ordering(393) 00:14:28.884 fused_ordering(394) 00:14:28.884 fused_ordering(395) 00:14:28.884 fused_ordering(396) 00:14:28.884 fused_ordering(397) 00:14:28.884 fused_ordering(398) 00:14:28.884 fused_ordering(399) 00:14:28.884 fused_ordering(400) 00:14:28.884 fused_ordering(401) 00:14:28.884 fused_ordering(402) 00:14:28.884 fused_ordering(403) 00:14:28.884 fused_ordering(404) 00:14:28.884 fused_ordering(405) 00:14:28.884 fused_ordering(406) 00:14:28.884 fused_ordering(407) 00:14:28.884 fused_ordering(408) 00:14:28.884 fused_ordering(409) 00:14:28.884 fused_ordering(410) 00:14:29.448 fused_ordering(411) 00:14:29.448 fused_ordering(412) 00:14:29.448 fused_ordering(413) 00:14:29.448 fused_ordering(414) 00:14:29.448 fused_ordering(415) 00:14:29.448 fused_ordering(416) 00:14:29.448 fused_ordering(417) 00:14:29.448 fused_ordering(418) 00:14:29.448 fused_ordering(419) 00:14:29.448 fused_ordering(420) 00:14:29.448 fused_ordering(421) 00:14:29.449 fused_ordering(422) 00:14:29.449 fused_ordering(423) 00:14:29.449 fused_ordering(424) 00:14:29.449 fused_ordering(425) 00:14:29.449 fused_ordering(426) 00:14:29.449 fused_ordering(427) 00:14:29.449 fused_ordering(428) 00:14:29.449 fused_ordering(429) 00:14:29.449 fused_ordering(430) 00:14:29.449 fused_ordering(431) 00:14:29.449 fused_ordering(432) 00:14:29.449 fused_ordering(433) 00:14:29.449 fused_ordering(434) 00:14:29.449 fused_ordering(435) 00:14:29.449 fused_ordering(436) 00:14:29.449 fused_ordering(437) 00:14:29.449 fused_ordering(438) 00:14:29.449 fused_ordering(439) 00:14:29.449 fused_ordering(440) 00:14:29.449 fused_ordering(441) 00:14:29.449 fused_ordering(442) 00:14:29.449 fused_ordering(443) 00:14:29.449 fused_ordering(444) 00:14:29.449 fused_ordering(445) 00:14:29.449 fused_ordering(446) 00:14:29.449 fused_ordering(447) 00:14:29.449 fused_ordering(448) 00:14:29.449 fused_ordering(449) 00:14:29.449 fused_ordering(450) 00:14:29.449 fused_ordering(451) 00:14:29.449 fused_ordering(452) 00:14:29.449 fused_ordering(453) 00:14:29.449 fused_ordering(454) 00:14:29.449 fused_ordering(455) 00:14:29.449 fused_ordering(456) 00:14:29.449 fused_ordering(457) 00:14:29.449 fused_ordering(458) 00:14:29.449 fused_ordering(459) 00:14:29.449 fused_ordering(460) 00:14:29.449 fused_ordering(461) 00:14:29.449 fused_ordering(462) 00:14:29.449 fused_ordering(463) 00:14:29.449 fused_ordering(464) 00:14:29.449 fused_ordering(465) 00:14:29.449 fused_ordering(466) 00:14:29.449 fused_ordering(467) 00:14:29.449 fused_ordering(468) 00:14:29.449 fused_ordering(469) 00:14:29.449 fused_ordering(470) 00:14:29.449 fused_ordering(471) 00:14:29.449 fused_ordering(472) 00:14:29.449 fused_ordering(473) 00:14:29.449 fused_ordering(474) 00:14:29.449 fused_ordering(475) 00:14:29.449 fused_ordering(476) 00:14:29.449 
fused_ordering(477) 00:14:29.449 fused_ordering(478) 00:14:29.449 fused_ordering(479) 00:14:29.449 fused_ordering(480) 00:14:29.449 fused_ordering(481) 00:14:29.449 fused_ordering(482) 00:14:29.449 fused_ordering(483) 00:14:29.449 fused_ordering(484) 00:14:29.449 fused_ordering(485) 00:14:29.449 fused_ordering(486) 00:14:29.449 fused_ordering(487) 00:14:29.449 fused_ordering(488) 00:14:29.449 fused_ordering(489) 00:14:29.449 fused_ordering(490) 00:14:29.449 fused_ordering(491) 00:14:29.449 fused_ordering(492) 00:14:29.449 fused_ordering(493) 00:14:29.449 fused_ordering(494) 00:14:29.449 fused_ordering(495) 00:14:29.449 fused_ordering(496) 00:14:29.449 fused_ordering(497) 00:14:29.449 fused_ordering(498) 00:14:29.449 fused_ordering(499) 00:14:29.449 fused_ordering(500) 00:14:29.449 fused_ordering(501) 00:14:29.449 fused_ordering(502) 00:14:29.449 fused_ordering(503) 00:14:29.449 fused_ordering(504) 00:14:29.449 fused_ordering(505) 00:14:29.449 fused_ordering(506) 00:14:29.449 fused_ordering(507) 00:14:29.449 fused_ordering(508) 00:14:29.449 fused_ordering(509) 00:14:29.449 fused_ordering(510) 00:14:29.449 fused_ordering(511) 00:14:29.449 fused_ordering(512) 00:14:29.449 fused_ordering(513) 00:14:29.449 fused_ordering(514) 00:14:29.449 fused_ordering(515) 00:14:29.449 fused_ordering(516) 00:14:29.449 fused_ordering(517) 00:14:29.449 fused_ordering(518) 00:14:29.449 fused_ordering(519) 00:14:29.449 fused_ordering(520) 00:14:29.449 fused_ordering(521) 00:14:29.449 fused_ordering(522) 00:14:29.449 fused_ordering(523) 00:14:29.449 fused_ordering(524) 00:14:29.449 fused_ordering(525) 00:14:29.449 fused_ordering(526) 00:14:29.449 fused_ordering(527) 00:14:29.449 fused_ordering(528) 00:14:29.449 fused_ordering(529) 00:14:29.449 fused_ordering(530) 00:14:29.449 fused_ordering(531) 00:14:29.449 fused_ordering(532) 00:14:29.449 fused_ordering(533) 00:14:29.449 fused_ordering(534) 00:14:29.449 fused_ordering(535) 00:14:29.449 fused_ordering(536) 00:14:29.449 fused_ordering(537) 00:14:29.449 fused_ordering(538) 00:14:29.449 fused_ordering(539) 00:14:29.449 fused_ordering(540) 00:14:29.449 fused_ordering(541) 00:14:29.449 fused_ordering(542) 00:14:29.449 fused_ordering(543) 00:14:29.449 fused_ordering(544) 00:14:29.449 fused_ordering(545) 00:14:29.449 fused_ordering(546) 00:14:29.449 fused_ordering(547) 00:14:29.449 fused_ordering(548) 00:14:29.449 fused_ordering(549) 00:14:29.449 fused_ordering(550) 00:14:29.449 fused_ordering(551) 00:14:29.449 fused_ordering(552) 00:14:29.449 fused_ordering(553) 00:14:29.449 fused_ordering(554) 00:14:29.449 fused_ordering(555) 00:14:29.449 fused_ordering(556) 00:14:29.449 fused_ordering(557) 00:14:29.449 fused_ordering(558) 00:14:29.449 fused_ordering(559) 00:14:29.449 fused_ordering(560) 00:14:29.449 fused_ordering(561) 00:14:29.449 fused_ordering(562) 00:14:29.449 fused_ordering(563) 00:14:29.449 fused_ordering(564) 00:14:29.449 fused_ordering(565) 00:14:29.449 fused_ordering(566) 00:14:29.449 fused_ordering(567) 00:14:29.449 fused_ordering(568) 00:14:29.449 fused_ordering(569) 00:14:29.449 fused_ordering(570) 00:14:29.449 fused_ordering(571) 00:14:29.449 fused_ordering(572) 00:14:29.449 fused_ordering(573) 00:14:29.449 fused_ordering(574) 00:14:29.449 fused_ordering(575) 00:14:29.449 fused_ordering(576) 00:14:29.449 fused_ordering(577) 00:14:29.449 fused_ordering(578) 00:14:29.449 fused_ordering(579) 00:14:29.449 fused_ordering(580) 00:14:29.449 fused_ordering(581) 00:14:29.449 fused_ordering(582) 00:14:29.449 fused_ordering(583) 00:14:29.449 fused_ordering(584) 
00:14:29.449 fused_ordering(585) 00:14:29.449 fused_ordering(586) 00:14:29.449 fused_ordering(587) 00:14:29.449 fused_ordering(588) 00:14:29.449 fused_ordering(589) 00:14:29.449 fused_ordering(590) 00:14:29.449 fused_ordering(591) 00:14:29.449 fused_ordering(592) 00:14:29.449 fused_ordering(593) 00:14:29.449 fused_ordering(594) 00:14:29.449 fused_ordering(595) 00:14:29.449 fused_ordering(596) 00:14:29.449 fused_ordering(597) 00:14:29.449 fused_ordering(598) 00:14:29.449 fused_ordering(599) 00:14:29.449 fused_ordering(600) 00:14:29.449 fused_ordering(601) 00:14:29.449 fused_ordering(602) 00:14:29.449 fused_ordering(603) 00:14:29.449 fused_ordering(604) 00:14:29.449 fused_ordering(605) 00:14:29.449 fused_ordering(606) 00:14:29.449 fused_ordering(607) 00:14:29.449 fused_ordering(608) 00:14:29.449 fused_ordering(609) 00:14:29.449 fused_ordering(610) 00:14:29.449 fused_ordering(611) 00:14:29.449 fused_ordering(612) 00:14:29.449 fused_ordering(613) 00:14:29.449 fused_ordering(614) 00:14:29.449 fused_ordering(615) 00:14:30.381 fused_ordering(616) 00:14:30.381 fused_ordering(617) 00:14:30.381 fused_ordering(618) 00:14:30.381 fused_ordering(619) 00:14:30.381 fused_ordering(620) 00:14:30.381 fused_ordering(621) 00:14:30.381 fused_ordering(622) 00:14:30.381 fused_ordering(623) 00:14:30.381 fused_ordering(624) 00:14:30.381 fused_ordering(625) 00:14:30.381 fused_ordering(626) 00:14:30.381 fused_ordering(627) 00:14:30.381 fused_ordering(628) 00:14:30.381 fused_ordering(629) 00:14:30.381 fused_ordering(630) 00:14:30.381 fused_ordering(631) 00:14:30.381 fused_ordering(632) 00:14:30.381 fused_ordering(633) 00:14:30.381 fused_ordering(634) 00:14:30.381 fused_ordering(635) 00:14:30.381 fused_ordering(636) 00:14:30.381 fused_ordering(637) 00:14:30.381 fused_ordering(638) 00:14:30.381 fused_ordering(639) 00:14:30.381 fused_ordering(640) 00:14:30.381 fused_ordering(641) 00:14:30.381 fused_ordering(642) 00:14:30.381 fused_ordering(643) 00:14:30.381 fused_ordering(644) 00:14:30.381 fused_ordering(645) 00:14:30.381 fused_ordering(646) 00:14:30.381 fused_ordering(647) 00:14:30.381 fused_ordering(648) 00:14:30.381 fused_ordering(649) 00:14:30.381 fused_ordering(650) 00:14:30.381 fused_ordering(651) 00:14:30.381 fused_ordering(652) 00:14:30.381 fused_ordering(653) 00:14:30.381 fused_ordering(654) 00:14:30.381 fused_ordering(655) 00:14:30.381 fused_ordering(656) 00:14:30.381 fused_ordering(657) 00:14:30.381 fused_ordering(658) 00:14:30.381 fused_ordering(659) 00:14:30.381 fused_ordering(660) 00:14:30.381 fused_ordering(661) 00:14:30.381 fused_ordering(662) 00:14:30.381 fused_ordering(663) 00:14:30.381 fused_ordering(664) 00:14:30.381 fused_ordering(665) 00:14:30.381 fused_ordering(666) 00:14:30.381 fused_ordering(667) 00:14:30.381 fused_ordering(668) 00:14:30.381 fused_ordering(669) 00:14:30.381 fused_ordering(670) 00:14:30.381 fused_ordering(671) 00:14:30.381 fused_ordering(672) 00:14:30.381 fused_ordering(673) 00:14:30.381 fused_ordering(674) 00:14:30.381 fused_ordering(675) 00:14:30.381 fused_ordering(676) 00:14:30.381 fused_ordering(677) 00:14:30.381 fused_ordering(678) 00:14:30.381 fused_ordering(679) 00:14:30.381 fused_ordering(680) 00:14:30.381 fused_ordering(681) 00:14:30.381 fused_ordering(682) 00:14:30.381 fused_ordering(683) 00:14:30.381 fused_ordering(684) 00:14:30.381 fused_ordering(685) 00:14:30.381 fused_ordering(686) 00:14:30.381 fused_ordering(687) 00:14:30.381 fused_ordering(688) 00:14:30.381 fused_ordering(689) 00:14:30.381 fused_ordering(690) 00:14:30.381 fused_ordering(691) 00:14:30.381 
fused_ordering(692) 00:14:30.381 fused_ordering(693) 00:14:30.381 fused_ordering(694) 00:14:30.381 fused_ordering(695) 00:14:30.381 fused_ordering(696) 00:14:30.381 fused_ordering(697) 00:14:30.381 fused_ordering(698) 00:14:30.381 fused_ordering(699) 00:14:30.381 fused_ordering(700) 00:14:30.381 fused_ordering(701) 00:14:30.381 fused_ordering(702) 00:14:30.381 fused_ordering(703) 00:14:30.381 fused_ordering(704) 00:14:30.381 fused_ordering(705) 00:14:30.381 fused_ordering(706) 00:14:30.381 fused_ordering(707) 00:14:30.381 fused_ordering(708) 00:14:30.381 fused_ordering(709) 00:14:30.381 fused_ordering(710) 00:14:30.381 fused_ordering(711) 00:14:30.381 fused_ordering(712) 00:14:30.381 fused_ordering(713) 00:14:30.381 fused_ordering(714) 00:14:30.381 fused_ordering(715) 00:14:30.381 fused_ordering(716) 00:14:30.381 fused_ordering(717) 00:14:30.381 fused_ordering(718) 00:14:30.381 fused_ordering(719) 00:14:30.381 fused_ordering(720) 00:14:30.381 fused_ordering(721) 00:14:30.381 fused_ordering(722) 00:14:30.381 fused_ordering(723) 00:14:30.381 fused_ordering(724) 00:14:30.381 fused_ordering(725) 00:14:30.381 fused_ordering(726) 00:14:30.381 fused_ordering(727) 00:14:30.381 fused_ordering(728) 00:14:30.381 fused_ordering(729) 00:14:30.381 fused_ordering(730) 00:14:30.381 fused_ordering(731) 00:14:30.381 fused_ordering(732) 00:14:30.381 fused_ordering(733) 00:14:30.381 fused_ordering(734) 00:14:30.381 fused_ordering(735) 00:14:30.381 fused_ordering(736) 00:14:30.381 fused_ordering(737) 00:14:30.381 fused_ordering(738) 00:14:30.381 fused_ordering(739) 00:14:30.381 fused_ordering(740) 00:14:30.381 fused_ordering(741) 00:14:30.381 fused_ordering(742) 00:14:30.381 fused_ordering(743) 00:14:30.381 fused_ordering(744) 00:14:30.381 fused_ordering(745) 00:14:30.381 fused_ordering(746) 00:14:30.381 fused_ordering(747) 00:14:30.381 fused_ordering(748) 00:14:30.381 fused_ordering(749) 00:14:30.381 fused_ordering(750) 00:14:30.381 fused_ordering(751) 00:14:30.381 fused_ordering(752) 00:14:30.381 fused_ordering(753) 00:14:30.381 fused_ordering(754) 00:14:30.381 fused_ordering(755) 00:14:30.381 fused_ordering(756) 00:14:30.381 fused_ordering(757) 00:14:30.381 fused_ordering(758) 00:14:30.381 fused_ordering(759) 00:14:30.381 fused_ordering(760) 00:14:30.381 fused_ordering(761) 00:14:30.381 fused_ordering(762) 00:14:30.381 fused_ordering(763) 00:14:30.381 fused_ordering(764) 00:14:30.381 fused_ordering(765) 00:14:30.381 fused_ordering(766) 00:14:30.381 fused_ordering(767) 00:14:30.381 fused_ordering(768) 00:14:30.382 fused_ordering(769) 00:14:30.382 fused_ordering(770) 00:14:30.382 fused_ordering(771) 00:14:30.382 fused_ordering(772) 00:14:30.382 fused_ordering(773) 00:14:30.382 fused_ordering(774) 00:14:30.382 fused_ordering(775) 00:14:30.382 fused_ordering(776) 00:14:30.382 fused_ordering(777) 00:14:30.382 fused_ordering(778) 00:14:30.382 fused_ordering(779) 00:14:30.382 fused_ordering(780) 00:14:30.382 fused_ordering(781) 00:14:30.382 fused_ordering(782) 00:14:30.382 fused_ordering(783) 00:14:30.382 fused_ordering(784) 00:14:30.382 fused_ordering(785) 00:14:30.382 fused_ordering(786) 00:14:30.382 fused_ordering(787) 00:14:30.382 fused_ordering(788) 00:14:30.382 fused_ordering(789) 00:14:30.382 fused_ordering(790) 00:14:30.382 fused_ordering(791) 00:14:30.382 fused_ordering(792) 00:14:30.382 fused_ordering(793) 00:14:30.382 fused_ordering(794) 00:14:30.382 fused_ordering(795) 00:14:30.382 fused_ordering(796) 00:14:30.382 fused_ordering(797) 00:14:30.382 fused_ordering(798) 00:14:30.382 fused_ordering(799) 
00:14:30.382 fused_ordering(800) 00:14:30.382 fused_ordering(801) 00:14:30.382 fused_ordering(802) 00:14:30.382 fused_ordering(803) 00:14:30.382 fused_ordering(804) 00:14:30.382 fused_ordering(805) 00:14:30.382 fused_ordering(806) 00:14:30.382 fused_ordering(807) 00:14:30.382 fused_ordering(808) 00:14:30.382 fused_ordering(809) 00:14:30.382 fused_ordering(810) 00:14:30.382 fused_ordering(811) 00:14:30.382 fused_ordering(812) 00:14:30.382 fused_ordering(813) 00:14:30.382 fused_ordering(814) 00:14:30.382 fused_ordering(815) 00:14:30.382 fused_ordering(816) 00:14:30.382 fused_ordering(817) 00:14:30.382 fused_ordering(818) 00:14:30.382 fused_ordering(819) 00:14:30.382 fused_ordering(820) 00:14:30.944 fused_ordering(821) 00:14:30.944 fused_ordering(822) 00:14:30.944 fused_ordering(823) 00:14:30.944 fused_ordering(824) 00:14:30.944 fused_ordering(825) 00:14:30.944 fused_ordering(826) 00:14:30.944 fused_ordering(827) 00:14:30.944 fused_ordering(828) 00:14:30.945 fused_ordering(829) 00:14:30.945 fused_ordering(830) 00:14:30.945 fused_ordering(831) 00:14:30.945 fused_ordering(832) 00:14:30.945 fused_ordering(833) 00:14:30.945 fused_ordering(834) 00:14:30.945 fused_ordering(835) 00:14:30.945 fused_ordering(836) 00:14:30.945 fused_ordering(837) 00:14:30.945 fused_ordering(838) 00:14:30.945 fused_ordering(839) 00:14:30.945 fused_ordering(840) 00:14:30.945 fused_ordering(841) 00:14:30.945 fused_ordering(842) 00:14:30.945 fused_ordering(843) 00:14:30.945 fused_ordering(844) 00:14:30.945 fused_ordering(845) 00:14:30.945 fused_ordering(846) 00:14:30.945 fused_ordering(847) 00:14:30.945 fused_ordering(848) 00:14:30.945 fused_ordering(849) 00:14:30.945 fused_ordering(850) 00:14:30.945 fused_ordering(851) 00:14:30.945 fused_ordering(852) 00:14:30.945 fused_ordering(853) 00:14:30.945 fused_ordering(854) 00:14:30.945 fused_ordering(855) 00:14:30.945 fused_ordering(856) 00:14:30.945 fused_ordering(857) 00:14:30.945 fused_ordering(858) 00:14:30.945 fused_ordering(859) 00:14:30.945 fused_ordering(860) 00:14:30.945 fused_ordering(861) 00:14:30.945 fused_ordering(862) 00:14:30.945 fused_ordering(863) 00:14:30.945 fused_ordering(864) 00:14:30.945 fused_ordering(865) 00:14:30.945 fused_ordering(866) 00:14:30.945 fused_ordering(867) 00:14:30.945 fused_ordering(868) 00:14:30.945 fused_ordering(869) 00:14:30.945 fused_ordering(870) 00:14:30.945 fused_ordering(871) 00:14:30.945 fused_ordering(872) 00:14:30.945 fused_ordering(873) 00:14:30.945 fused_ordering(874) 00:14:30.945 fused_ordering(875) 00:14:30.945 fused_ordering(876) 00:14:30.945 fused_ordering(877) 00:14:30.945 fused_ordering(878) 00:14:30.945 fused_ordering(879) 00:14:30.945 fused_ordering(880) 00:14:30.945 fused_ordering(881) 00:14:30.945 fused_ordering(882) 00:14:30.945 fused_ordering(883) 00:14:30.945 fused_ordering(884) 00:14:30.945 fused_ordering(885) 00:14:30.945 fused_ordering(886) 00:14:30.945 fused_ordering(887) 00:14:30.945 fused_ordering(888) 00:14:30.945 fused_ordering(889) 00:14:30.945 fused_ordering(890) 00:14:30.945 fused_ordering(891) 00:14:30.945 fused_ordering(892) 00:14:30.945 fused_ordering(893) 00:14:30.945 fused_ordering(894) 00:14:30.945 fused_ordering(895) 00:14:30.945 fused_ordering(896) 00:14:30.945 fused_ordering(897) 00:14:30.945 fused_ordering(898) 00:14:30.945 fused_ordering(899) 00:14:30.945 fused_ordering(900) 00:14:30.945 fused_ordering(901) 00:14:30.945 fused_ordering(902) 00:14:30.945 fused_ordering(903) 00:14:30.945 fused_ordering(904) 00:14:30.945 fused_ordering(905) 00:14:30.945 fused_ordering(906) 00:14:30.945 
fused_ordering(907) 00:14:30.945 fused_ordering(908) 00:14:30.945 fused_ordering(909) 00:14:30.945 fused_ordering(910) 00:14:30.945 fused_ordering(911) 00:14:30.945 fused_ordering(912) 00:14:30.945 fused_ordering(913) 00:14:30.945 fused_ordering(914) 00:14:30.945 fused_ordering(915) 00:14:30.945 fused_ordering(916) 00:14:30.945 fused_ordering(917) 00:14:30.945 fused_ordering(918) 00:14:30.945 fused_ordering(919) 00:14:30.945 fused_ordering(920) 00:14:30.945 fused_ordering(921) 00:14:30.945 fused_ordering(922) 00:14:30.945 fused_ordering(923) 00:14:30.945 fused_ordering(924) 00:14:30.945 fused_ordering(925) 00:14:30.945 fused_ordering(926) 00:14:30.945 fused_ordering(927) 00:14:30.945 fused_ordering(928) 00:14:30.945 fused_ordering(929) 00:14:30.945 fused_ordering(930) 00:14:30.945 fused_ordering(931) 00:14:30.945 fused_ordering(932) 00:14:30.945 fused_ordering(933) 00:14:30.945 fused_ordering(934) 00:14:30.945 fused_ordering(935) 00:14:30.945 fused_ordering(936) 00:14:30.945 fused_ordering(937) 00:14:30.945 fused_ordering(938) 00:14:30.945 fused_ordering(939) 00:14:30.945 fused_ordering(940) 00:14:30.945 fused_ordering(941) 00:14:30.945 fused_ordering(942) 00:14:30.945 fused_ordering(943) 00:14:30.945 fused_ordering(944) 00:14:30.945 fused_ordering(945) 00:14:30.945 fused_ordering(946) 00:14:30.945 fused_ordering(947) 00:14:30.945 fused_ordering(948) 00:14:30.945 fused_ordering(949) 00:14:30.945 fused_ordering(950) 00:14:30.945 fused_ordering(951) 00:14:30.945 fused_ordering(952) 00:14:30.945 fused_ordering(953) 00:14:30.945 fused_ordering(954) 00:14:30.945 fused_ordering(955) 00:14:30.945 fused_ordering(956) 00:14:30.945 fused_ordering(957) 00:14:30.945 fused_ordering(958) 00:14:30.945 fused_ordering(959) 00:14:30.945 fused_ordering(960) 00:14:30.945 fused_ordering(961) 00:14:30.945 fused_ordering(962) 00:14:30.945 fused_ordering(963) 00:14:30.945 fused_ordering(964) 00:14:30.945 fused_ordering(965) 00:14:30.945 fused_ordering(966) 00:14:30.945 fused_ordering(967) 00:14:30.945 fused_ordering(968) 00:14:30.945 fused_ordering(969) 00:14:30.945 fused_ordering(970) 00:14:30.945 fused_ordering(971) 00:14:30.945 fused_ordering(972) 00:14:30.945 fused_ordering(973) 00:14:30.945 fused_ordering(974) 00:14:30.945 fused_ordering(975) 00:14:30.945 fused_ordering(976) 00:14:30.945 fused_ordering(977) 00:14:30.945 fused_ordering(978) 00:14:30.945 fused_ordering(979) 00:14:30.945 fused_ordering(980) 00:14:30.945 fused_ordering(981) 00:14:30.945 fused_ordering(982) 00:14:30.945 fused_ordering(983) 00:14:30.945 fused_ordering(984) 00:14:30.945 fused_ordering(985) 00:14:30.945 fused_ordering(986) 00:14:30.945 fused_ordering(987) 00:14:30.945 fused_ordering(988) 00:14:30.945 fused_ordering(989) 00:14:30.945 fused_ordering(990) 00:14:30.945 fused_ordering(991) 00:14:30.945 fused_ordering(992) 00:14:30.945 fused_ordering(993) 00:14:30.945 fused_ordering(994) 00:14:30.945 fused_ordering(995) 00:14:30.945 fused_ordering(996) 00:14:30.945 fused_ordering(997) 00:14:30.945 fused_ordering(998) 00:14:30.945 fused_ordering(999) 00:14:30.945 fused_ordering(1000) 00:14:30.945 fused_ordering(1001) 00:14:30.945 fused_ordering(1002) 00:14:30.945 fused_ordering(1003) 00:14:30.945 fused_ordering(1004) 00:14:30.945 fused_ordering(1005) 00:14:30.945 fused_ordering(1006) 00:14:30.945 fused_ordering(1007) 00:14:30.945 fused_ordering(1008) 00:14:30.945 fused_ordering(1009) 00:14:30.945 fused_ordering(1010) 00:14:30.945 fused_ordering(1011) 00:14:30.945 fused_ordering(1012) 00:14:30.945 fused_ordering(1013) 00:14:30.945 
fused_ordering(1014) 00:14:30.945 fused_ordering(1015) 00:14:30.945 fused_ordering(1016) 00:14:30.945 fused_ordering(1017) 00:14:30.945 fused_ordering(1018) 00:14:30.945 fused_ordering(1019) 00:14:30.945 fused_ordering(1020) 00:14:30.945 fused_ordering(1021) 00:14:30.945 fused_ordering(1022) 00:14:30.945 fused_ordering(1023) 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.203 rmmod nvme_tcp 00:14:31.203 rmmod nvme_fabrics 00:14:31.203 rmmod nvme_keyring 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 390123 ']' 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 390123 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 390123 ']' 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 390123 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 390123 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 390123' 00:14:31.203 killing process with pid 390123 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 390123 00:14:31.203 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 390123 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.463 03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.463 
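All 1024 fused_ordering iterations above ran against a subsystem that the harness provisioned over the RPC socket immediately before the test, and the trailing commands unload the kernel NVMe fabrics modules and stop the target. A hedged recap of that provisioning, the test invocation, and the cleanup, calling scripts/rpc.py directly instead of the rpc_cmd wrapper; the names and flags are copied from the trace, while the final namespace removal is an assumption about what _remove_spdk_ns does on this host:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as traced
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # -a: allow any host
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512 B blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns "$NQN" NULL1

  # Drive the fused-ordering workload against that listener.
  "$SPDK/test/nvme/fused_ordering/fused_ordering" \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"

  # Cleanup, mirroring nvmftestfini: unload initiator modules, stop the target,
  # then drop the test namespace (assumed to be what _remove_spdk_ns covers here).
  modprobe -r nvme-tcp nvme-fabrics
  kill "$(pgrep -f build/bin/nvmf_tgt)"
  ip netns del cvl_0_0_ns_spdk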
03:13:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.367 03:13:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.367 00:14:33.367 real 0m8.334s 00:14:33.367 user 0m5.933s 00:14:33.367 sys 0m3.988s 00:14:33.367 03:13:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.367 03:13:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:33.367 ************************************ 00:14:33.367 END TEST nvmf_fused_ordering 00:14:33.367 ************************************ 00:14:33.367 03:13:59 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:33.367 03:13:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:33.367 03:13:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.367 03:13:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.367 ************************************ 00:14:33.367 START TEST nvmf_delete_subsystem 00:14:33.367 ************************************ 00:14:33.367 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:33.626 * Looking for test storage... 00:14:33.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.626 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.627 03:13:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.529 03:14:01 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.529 03:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:14:35.529 00:14:35.529 --- 10.0.0.2 ping statistics --- 00:14:35.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.529 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:14:35.529 00:14:35.529 --- 10.0.0.1 ping statistics --- 00:14:35.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.529 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:35.529 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=392584 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 392584 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 392584 ']' 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:35.530 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.530 [2024-07-23 03:14:02.090282] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:35.530 [2024-07-23 03:14:02.090380] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.788 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.788 [2024-07-23 03:14:02.158368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:35.788 [2024-07-23 03:14:02.249261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:35.788 [2024-07-23 03:14:02.249318] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.788 [2024-07-23 03:14:02.249346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.788 [2024-07-23 03:14:02.249358] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.788 [2024-07-23 03:14:02.249368] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.788 [2024-07-23 03:14:02.249450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.788 [2024-07-23 03:14:02.249455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.046 [2024-07-23 03:14:02.398845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.046 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 [2024-07-23 03:14:02.415089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 NULL1 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 Delay0 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=392609 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:36.047 03:14:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:36.047 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.047 [2024-07-23 03:14:02.489791] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
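For reference, the setup traced above reduces to the following RPC sequence (a minimal sketch, assuming a built SPDK tree with scripts/rpc.py talking to the default /var/tmp/spdk.sock socket; the transport, subsystem, bdev and listener parameters are the ones visible in the trace, everything else is assumed):

  # Create the TCP transport and a subsystem backed by a delay bdev on top of a null bdev.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Queue I/O against it, then (as the trace that follows shows) delete the subsystem underneath the initiator,
  # which is what produces the aborted "completed with error (sct=0, sc=8)" completions below.
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait "$perf_pid"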
00:14:37.944 03:14:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.944 03:14:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.944 03:14:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 starting I/O failed: -6 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 [2024-07-23 03:14:04.741211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7b00 is same with the state(5) to be set 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write 
completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Read completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.201 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 
Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 starting I/O failed: -6 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 [2024-07-23 03:14:04.742506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1c400c470 is same with the state(5) to be set 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed 
with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Read completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:38.202 Write completed with error (sct=0, sc=8) 00:14:39.572 [2024-07-23 03:14:05.713410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4620 is same with the state(5) to be set 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 [2024-07-23 03:14:05.743762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1c400bfe0 is same with the state(5) to be set 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 [2024-07-23 03:14:05.743911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1c400c780 is same 
with the state(5) to be set 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 [2024-07-23 03:14:05.744640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccd40 is same with the state(5) to be set 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Write completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.572 Read completed with error (sct=0, sc=8) 00:14:39.573 Write completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Write completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Write completed with error (sct=0, sc=8) 00:14:39.573 Write completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Read completed with error (sct=0, sc=8) 00:14:39.573 Write completed with error (sct=0, sc=8) 00:14:39.573 [2024-07-23 03:14:05.745855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7ce0 is same with the state(5) to be set 00:14:39.573 Initializing NVMe Controllers 00:14:39.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.573 Controller IO queue size 128, less than required. 00:14:39.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:39.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:39.573 Initialization complete. Launching workers. 
00:14:39.573 ======================================================== 00:14:39.573 Latency(us) 00:14:39.573 Device Information : IOPS MiB/s Average min max 00:14:39.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.82 0.08 904382.42 465.99 1012130.60 00:14:39.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.38 0.08 937071.47 313.37 2000739.54 00:14:39.573 ======================================================== 00:14:39.573 Total : 325.21 0.16 920302.74 313.37 2000739.54 00:14:39.573 00:14:39.573 [2024-07-23 03:14:05.746363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4620 (9): Bad file descriptor 00:14:39.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:39.573 03:14:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.573 03:14:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:39.573 03:14:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 392609 00:14:39.573 03:14:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 392609 00:14:39.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (392609) - No such process 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 392609 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 392609 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 392609 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.830 [2024-07-23 03:14:06.269264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=393017 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:39.830 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.830 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.830 [2024-07-23 03:14:06.332544] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
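The second pass re-creates the subsystem, starts another timed spdk_nvme_perf run (pid 393017 here) and then polls that pid until the run finishes. A rough sketch of the polling loop seen in the trace below, with the variable names and the 20-iteration bound taken from the log and the exact control flow an assumption:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
      (( delay++ > 20 )) && exit 1             # give up after ~10 s of 0.5 s sleeps
      sleep 0.5
  done
  wait "$perf_pid"                             # reap it once kill -0 reports "No such process"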
00:14:40.394 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.394 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:40.394 03:14:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.959 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.959 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:40.959 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.216 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.216 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:41.216 03:14:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.780 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.780 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:41.780 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.343 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.343 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:42.343 03:14:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.950 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.950 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:42.950 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.950 Initializing NVMe Controllers 00:14:42.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.950 Controller IO queue size 128, less than required. 00:14:42.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:42.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:42.950 Initialization complete. Launching workers. 
00:14:42.950 ======================================================== 00:14:42.950 Latency(us) 00:14:42.950 Device Information : IOPS MiB/s Average min max 00:14:42.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004439.31 1000261.79 1013195.53 00:14:42.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004336.95 1000265.87 1012727.96 00:14:42.950 ======================================================== 00:14:42.950 Total : 256.00 0.12 1004388.13 1000261.79 1013195.53 00:14:42.950 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 393017 00:14:43.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (393017) - No such process 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 393017 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.515 rmmod nvme_tcp 00:14:43.515 rmmod nvme_fabrics 00:14:43.515 rmmod nvme_keyring 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 392584 ']' 00:14:43.515 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 392584 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 392584 ']' 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 392584 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 392584 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 392584' 00:14:43.516 killing process with pid 392584 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 392584 00:14:43.516 03:14:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 392584 
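Teardown (nvmftestfini) then mirrors the fused_ordering case: unload the kernel NVMe/TCP modules, kill the target and undo the test network. A compressed sketch of the cleanup steps traced immediately above and below; the namespace and interface names come from the log, while the explicit "ip netns delete" is only an assumption about what _remove_spdk_ns does, since its output is redirected away:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # nvmf_tgt ran as pid 392584 in this run
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1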
00:14:43.775 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.776 03:14:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.679 03:14:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.679 00:14:45.679 real 0m12.220s 00:14:45.679 user 0m27.879s 00:14:45.679 sys 0m2.931s 00:14:45.679 03:14:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.679 03:14:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.679 ************************************ 00:14:45.679 END TEST nvmf_delete_subsystem 00:14:45.679 ************************************ 00:14:45.679 03:14:12 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:45.679 03:14:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:45.679 03:14:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.679 03:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.679 ************************************ 00:14:45.679 START TEST nvmf_ns_masking 00:14:45.679 ************************************ 00:14:45.679 03:14:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:45.679 * Looking for test storage... 
00:14:45.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.679 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.679 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.938 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=d43fb8fe-d1b4-4398-97b4-8eaabdb37423 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.939 03:14:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:45.939 03:14:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:47.838 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:47.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:47.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:47.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
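Editor's note: both e810 ports have now been mapped to their kernel net devices (cvl_0_0 and cvl_0_1). The nvmf_tcp_init trace that follows splits them across a network namespace so the SPDK target and the kernel initiator can share one host. Condensed, grouped by role rather than in trace order, and with the names and addresses taken from this run, the setup amounts to roughly:

    # target side: isolate cvl_0_0 in its own namespace with 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: cvl_0_1 stays in the default namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity pings in both directions before the target is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1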
00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:47.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:47.839 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:14:47.839 00:14:47.839 --- 10.0.0.2 ping statistics --- 00:14:47.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.839 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:48.098 00:14:48.098 --- 10.0.0.1 ping statistics --- 00:14:48.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.098 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=395473 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 395473 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 395473 ']' 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:48.098 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.098 [2024-07-23 03:14:14.495842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
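Editor's note: with connectivity verified, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and everything that follows is driven through scripts/rpc.py and nvme-cli. A condensed sketch of the masking flow traced below (rpc.py is shorthand for the full scripts/rpc.py path in this workspace; NQNs, UUID and addresses are the ones used in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # namespace 1 is first added auto-visible, then re-added hidden and exposed per host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # visibility is checked from the initiator side; the test treats an all-zero NGUID
    # (or no matching entry in list-ns) as "namespace not visible"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I d43fb8fe-d1b4-4398-97b4-8eaabdb37423 -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid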
00:14:48.098 [2024-07-23 03:14:14.495933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.098 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.098 [2024-07-23 03:14:14.561712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.098 [2024-07-23 03:14:14.650947] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.098 [2024-07-23 03:14:14.651006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.098 [2024-07-23 03:14:14.651020] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.098 [2024-07-23 03:14:14.651031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.098 [2024-07-23 03:14:14.651040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.098 [2024-07-23 03:14:14.651125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.098 [2024-07-23 03:14:14.651192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.098 [2024-07-23 03:14:14.651258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.098 [2024-07-23 03:14:14.651260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.356 03:14:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.614 [2024-07-23 03:14:15.076267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.614 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:48.614 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:48.614 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.872 Malloc1 00:14:48.872 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.130 Malloc2 00:14:49.130 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:49.387 03:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:49.644 03:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.901 [2024-07-23 03:14:16.397583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.901 03:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:49.901 03:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d43fb8fe-d1b4-4398-97b4-8eaabdb37423 -a 10.0.0.2 -s 4420 -i 4 00:14:50.158 03:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.158 03:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:50.158 03:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.158 03:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:50.158 03:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:52.056 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:52.313 [ 0]:0x1 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1739b099f3004a0e840fb4bd38e2c923 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1739b099f3004a0e840fb4bd38e2c923 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.313 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:52.571 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:52.571 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:52.571 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:52.571 [ 0]:0x1 00:14:52.571 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.571 03:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1739b099f3004a0e840fb4bd38e2c923 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1739b099f3004a0e840fb4bd38e2c923 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:52.571 [ 1]:0x2 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.571 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.828 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:53.085 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:53.085 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d43fb8fe-d1b4-4398-97b4-8eaabdb37423 -a 10.0.0.2 -s 4420 -i 4 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:53.343 03:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:55.869 [ 0]:0x2 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.869 03:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:55.869 [ 0]:0x1 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1739b099f3004a0e840fb4bd38e2c923 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1739b099f3004a0e840fb4bd38e2c923 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:55.869 [ 1]:0x2 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.869 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:56.127 
03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:56.127 [ 0]:0x2 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.127 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:56.385 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.385 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:56.385 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.385 03:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d43fb8fe-d1b4-4398-97b4-8eaabdb37423 -a 10.0.0.2 -s 4420 -i 4 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:56.642 03:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.170 [ 0]:0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1739b099f3004a0e840fb4bd38e2c923 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1739b099f3004a0e840fb4bd38e2c923 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:59.170 [ 1]:0x2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:59.170 [ 0]:0x2 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:59.170 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.429 [2024-07-23 03:14:25.956711] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:59.429 request: 00:14:59.429 { 00:14:59.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.429 "nsid": 2, 00:14:59.429 "host": "nqn.2016-06.io.spdk:host1", 00:14:59.429 "method": 
"nvmf_ns_remove_host", 00:14:59.429 "req_id": 1 00:14:59.429 } 00:14:59.429 Got JSON-RPC error response 00:14:59.429 response: 00:14:59.429 { 00:14:59.429 "code": -32602, 00:14:59.429 "message": "Invalid parameters" 00:14:59.429 } 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.429 03:14:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:59.686 [ 0]:0x2 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=30132beabc5744e1a997651710208be5 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 30132beabc5744e1a997651710208be5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.686 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.944 rmmod nvme_tcp 00:14:59.944 rmmod nvme_fabrics 00:14:59.944 rmmod nvme_keyring 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 395473 ']' 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 395473 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 395473 ']' 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 395473 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 395473 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 395473' 00:14:59.944 killing process with pid 395473 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 395473 00:14:59.944 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 395473 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.201 03:14:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.762 03:14:28 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.762 00:15:02.762 real 0m16.557s 00:15:02.762 user 0m51.597s 00:15:02.762 sys 0m3.728s 00:15:02.762 03:14:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:02.762 03:14:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 ************************************ 00:15:02.762 END TEST nvmf_ns_masking 00:15:02.762 ************************************ 00:15:02.762 03:14:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:02.762 03:14:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.762 03:14:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:02.762 03:14:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:02.762 03:14:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 ************************************ 00:15:02.762 START TEST nvmf_nvme_cli 00:15:02.762 ************************************ 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.762 * Looking for test storage... 00:15:02.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.762 03:14:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.664 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:04.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:04.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.665 03:14:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:04.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:04.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:15:04.665 00:15:04.665 --- 10.0.0.2 ping statistics --- 00:15:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.665 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:15:04.665 00:15:04.665 --- 10.0.0.1 ping statistics --- 00:15:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.665 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=398900 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 398900 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 398900 ']' 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
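The bring-up traced above reduces to a short iproute2 sequence: one E810 port (cvl_0_0) is moved into a private namespace that will host the target, its sibling (cvl_0_1) stays in the root namespace as the initiator side, both get a 10.0.0.x/24 address, port 4420 is opened, and connectivity is checked before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using the interface and namespace names from the trace (paths are abbreviated, and the real helpers in nvmf/common.sh first flush the addresses and wrap each step in error handling and cleanup traps):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                              # host-side initiator driver
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The subsequent trace then talks to this target over the RPC socket to create the TCP transport, the malloc-backed subsystem, and the 10.0.0.2:4420 listener before exercising it with nvme discover/connect.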
00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.665 03:14:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.665 [2024-07-23 03:14:31.010220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:04.665 [2024-07-23 03:14:31.010316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.665 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.665 [2024-07-23 03:14:31.088348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.665 [2024-07-23 03:14:31.184872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.665 [2024-07-23 03:14:31.184929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.665 [2024-07-23 03:14:31.184946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.665 [2024-07-23 03:14:31.184960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.665 [2024-07-23 03:14:31.184973] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.665 [2024-07-23 03:14:31.185035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.665 [2024-07-23 03:14:31.185073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.665 [2024-07-23 03:14:31.185187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.665 [2024-07-23 03:14:31.185190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.923 [2024-07-23 03:14:31.336287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.923 Malloc0 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.923 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 Malloc1 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 [2024-07-23 03:14:31.421987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.924 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:05.181 00:15:05.181 Discovery Log Number of Records 2, Generation counter 2 00:15:05.181 =====Discovery Log Entry 0====== 00:15:05.181 trtype: tcp 00:15:05.181 adrfam: ipv4 00:15:05.181 subtype: current discovery subsystem 00:15:05.181 treq: not required 00:15:05.181 portid: 0 00:15:05.181 trsvcid: 4420 00:15:05.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:05.181 traddr: 10.0.0.2 00:15:05.181 eflags: explicit discovery connections, duplicate discovery information 00:15:05.181 sectype: none 00:15:05.181 =====Discovery Log Entry 1====== 00:15:05.181 trtype: tcp 00:15:05.181 adrfam: ipv4 00:15:05.181 subtype: nvme subsystem 00:15:05.181 treq: not required 00:15:05.181 portid: 0 00:15:05.181 trsvcid: 
4420 00:15:05.181 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:05.181 traddr: 10.0.0.2 00:15:05.181 eflags: none 00:15:05.181 sectype: none 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:05.181 03:14:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:05.746 03:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.641 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:07.898 03:14:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:07.898 /dev/nvme0n1 ]] 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.898 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:08.156 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:08.414 03:14:34 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.414 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.414 rmmod nvme_tcp 00:15:08.414 rmmod nvme_fabrics 00:15:08.415 rmmod nvme_keyring 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 398900 ']' 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 398900 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 398900 ']' 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 398900 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 398900 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 398900' 00:15:08.415 killing process with pid 398900 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 398900 00:15:08.415 03:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 398900 00:15:08.673 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.673 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.673 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.673 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.674 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.674 03:14:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.674 03:14:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.674 03:14:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.577 03:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.577 00:15:10.577 real 0m8.327s 00:15:10.577 user 0m15.925s 00:15:10.577 sys 0m2.196s 00:15:10.577 03:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.577 03:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.577 ************************************ 00:15:10.577 END TEST nvmf_nvme_cli 00:15:10.577 ************************************ 00:15:10.835 03:14:37 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:10.835 03:14:37 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:10.835 03:14:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:10.835 03:14:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.835 03:14:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.835 ************************************ 00:15:10.835 START TEST nvmf_vfio_user 00:15:10.835 ************************************ 00:15:10.835 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:10.835 * Looking for test storage... 00:15:10.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.835 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:10.836 
03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=399819 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 399819' 00:15:10.836 Process pid: 399819 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 399819 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 399819 ']' 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:10.836 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:10.836 [2024-07-23 03:14:37.301356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:10.836 [2024-07-23 03:14:37.301431] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.836 [2024-07-23 03:14:37.363582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.094 [2024-07-23 03:14:37.452114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.094 [2024-07-23 03:14:37.452173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.094 [2024-07-23 03:14:37.452211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.094 [2024-07-23 03:14:37.452222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.094 [2024-07-23 03:14:37.452233] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
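The vfio-user pass that follows repeats the same target bring-up with a different transport: instead of a TCP listener, nvmf_tgt exposes each subsystem as an emulated NVMe controller behind a vfio-user socket under /var/run/vfio-user, and spdk_nvme_identify then attaches to it as if it were a local PCIe device. Condensed from the RPC calls recorded further down in this trace (rpc.py stands for scripts/rpc.py talking to the default /var/tmp/spdk.sock, paths are abbreviated, and the identify invocation's debug -L flags are omitted), the per-controller setup is roughly:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # second emulated device: repeat with Malloc2 / cnode2 / vfio-user2/2, then e.g.
  spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The identify tool's debug output below then walks through the usual controller init handshake (map BARs, read VS/CAP, write CC.EN=1, wait for CSTS.RDY=1, identify controller and namespaces) over the vfio-user socket rather than real PCI config space.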
00:15:11.094 [2024-07-23 03:14:37.452320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.094 [2024-07-23 03:14:37.452342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.094 [2024-07-23 03:14:37.452393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.094 [2024-07-23 03:14:37.452395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.094 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.094 03:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:11.094 03:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:12.027 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:12.284 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:12.284 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:12.284 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.284 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:12.284 03:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:12.542 Malloc1 00:15:12.542 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:12.800 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:13.058 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:13.315 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.315 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:13.315 03:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:13.573 Malloc2 00:15:13.573 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:13.830 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:14.088 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:14.345 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:14.346 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:14.346 03:14:40 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.346 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:14.346 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:14.346 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:14.346 [2024-07-23 03:14:40.868985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:14.346 [2024-07-23 03:14:40.869026] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400247 ] 00:15:14.346 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.346 [2024-07-23 03:14:40.902531] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:14.346 [2024-07-23 03:14:40.912083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.346 [2024-07-23 03:14:40.912112] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe7da778000 00:15:14.346 [2024-07-23 03:14:40.913074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.914064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.915072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.916076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.917079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.918093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.919095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.920102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.346 [2024-07-23 03:14:40.921107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.346 [2024-07-23 03:14:40.921127] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe7d952a000 00:15:14.609 [2024-07-23 03:14:40.922330] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.609 [2024-07-23 03:14:40.938146] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:14.609 [2024-07-23 03:14:40.938183] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:14.609 [2024-07-23 03:14:40.941229] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:14.609 [2024-07-23 03:14:40.941290] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:14.609 [2024-07-23 03:14:40.941381] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:14.609 [2024-07-23 03:14:40.941414] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:14.609 [2024-07-23 03:14:40.941425] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:14.609 [2024-07-23 03:14:40.942221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:14.609 [2024-07-23 03:14:40.942247] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:14.609 [2024-07-23 03:14:40.942261] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:14.609 [2024-07-23 03:14:40.943232] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:14.609 [2024-07-23 03:14:40.943252] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:14.609 [2024-07-23 03:14:40.943266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:14.609 [2024-07-23 03:14:40.944236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:14.609 [2024-07-23 03:14:40.944255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:14.609 [2024-07-23 03:14:40.945242] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:14.609 [2024-07-23 03:14:40.945261] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:14.609 [2024-07-23 03:14:40.945270] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:14.610 [2024-07-23 03:14:40.945281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:14.610 [2024-07-23 03:14:40.945390] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:14.610 [2024-07-23 03:14:40.945398] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:14.610 [2024-07-23 03:14:40.945407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:14.610 [2024-07-23 03:14:40.946257] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:14.610 [2024-07-23 03:14:40.947259] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:14.610 [2024-07-23 03:14:40.948263] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:14.610 [2024-07-23 03:14:40.949259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.610 [2024-07-23 03:14:40.952629] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:14.610 [2024-07-23 03:14:40.953288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:14.610 [2024-07-23 03:14:40.953305] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:14.610 [2024-07-23 03:14:40.953313] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953337] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:14.610 [2024-07-23 03:14:40.953356] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953389] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.610 [2024-07-23 03:14:40.953398] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.610 [2024-07-23 03:14:40.953421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.953521] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:14.610 [2024-07-23 03:14:40.953531] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:14.610 [2024-07-23 03:14:40.953538] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:14.610 [2024-07-23 03:14:40.953546] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:14.610 [2024-07-23 03:14:40.953553] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:15:14.610 [2024-07-23 03:14:40.953561] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:14.610 [2024-07-23 03:14:40.953568] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953581] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.953635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.953656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.610 [2024-07-23 03:14:40.953669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.610 [2024-07-23 03:14:40.953681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.610 [2024-07-23 03:14:40.953693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.610 [2024-07-23 03:14:40.953705] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953723] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.953761] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:14.610 [2024-07-23 03:14:40.953770] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953781] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953795] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.953823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.953890] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953906] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.953935] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:14.610 [2024-07-23 03:14:40.953943] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:14.610 [2024-07-23 03:14:40.953952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.953969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.953986] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:14.610 [2024-07-23 03:14:40.954002] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.954017] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.954028] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.610 [2024-07-23 03:14:40.954036] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.610 [2024-07-23 03:14:40.954045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.610 [2024-07-23 03:14:40.954067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:14.610 [2024-07-23 03:14:40.954090] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:14.610 [2024-07-23 03:14:40.954105] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954120] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.611 [2024-07-23 03:14:40.954128] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.611 [2024-07-23 03:14:40.954137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954168] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954179] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954193] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954204] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954212] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954220] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:14.611 [2024-07-23 03:14:40.954228] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:14.611 [2024-07-23 03:14:40.954236] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:14.611 [2024-07-23 03:14:40.954267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954388] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:14.611 [2024-07-23 03:14:40.954397] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:14.611 [2024-07-23 03:14:40.954403] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:14.611 [2024-07-23 03:14:40.954409] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:14.611 [2024-07-23 03:14:40.954418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:14.611 [2024-07-23 03:14:40.954429] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:14.611 [2024-07-23 03:14:40.954441] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:14.611 [2024-07-23 03:14:40.954450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954461] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:14.611 [2024-07-23 03:14:40.954469] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.611 [2024-07-23 03:14:40.954477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954489] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:14.611 [2024-07-23 03:14:40.954497] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:14.611 [2024-07-23 03:14:40.954505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:14.611 [2024-07-23 03:14:40.954516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:14.611 [2024-07-23 03:14:40.954565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:14.611 ===================================================== 00:15:14.611 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.611 ===================================================== 00:15:14.611 Controller Capabilities/Features 00:15:14.611 ================================ 00:15:14.611 Vendor ID: 4e58 00:15:14.611 Subsystem Vendor ID: 4e58 00:15:14.611 Serial Number: SPDK1 00:15:14.611 Model Number: SPDK bdev Controller 00:15:14.611 Firmware Version: 24.05.1 00:15:14.611 Recommended Arb Burst: 6 00:15:14.611 IEEE OUI Identifier: 8d 6b 50 00:15:14.611 Multi-path I/O 00:15:14.611 May have multiple subsystem ports: Yes 00:15:14.611 May have multiple controllers: Yes 00:15:14.611 Associated with SR-IOV VF: No 00:15:14.611 Max Data Transfer Size: 131072 00:15:14.611 Max Number of Namespaces: 32 00:15:14.611 Max Number of I/O Queues: 127 00:15:14.611 NVMe Specification Version (VS): 1.3 00:15:14.611 NVMe Specification Version (Identify): 1.3 00:15:14.611 Maximum Queue Entries: 256 00:15:14.611 Contiguous Queues Required: Yes 00:15:14.611 Arbitration Mechanisms Supported 00:15:14.611 Weighted Round Robin: Not Supported 00:15:14.611 Vendor Specific: Not Supported 00:15:14.611 Reset Timeout: 15000 ms 00:15:14.611 Doorbell Stride: 4 bytes 00:15:14.611 NVM Subsystem Reset: Not Supported 00:15:14.611 Command Sets Supported 00:15:14.611 NVM Command Set: Supported 00:15:14.611 Boot Partition: Not Supported 00:15:14.611 Memory Page Size Minimum: 4096 bytes 00:15:14.611 Memory Page Size Maximum: 4096 bytes 00:15:14.611 Persistent Memory Region: Not Supported 00:15:14.611 Optional Asynchronous Events Supported 00:15:14.611 Namespace Attribute Notices: Supported 00:15:14.611 Firmware Activation Notices: Not Supported 00:15:14.611 ANA Change Notices: Not Supported 00:15:14.611 PLE Aggregate Log Change Notices: 
Not Supported 00:15:14.611 LBA Status Info Alert Notices: Not Supported 00:15:14.611 EGE Aggregate Log Change Notices: Not Supported 00:15:14.611 Normal NVM Subsystem Shutdown event: Not Supported 00:15:14.611 Zone Descriptor Change Notices: Not Supported 00:15:14.611 Discovery Log Change Notices: Not Supported 00:15:14.611 Controller Attributes 00:15:14.611 128-bit Host Identifier: Supported 00:15:14.611 Non-Operational Permissive Mode: Not Supported 00:15:14.611 NVM Sets: Not Supported 00:15:14.611 Read Recovery Levels: Not Supported 00:15:14.611 Endurance Groups: Not Supported 00:15:14.611 Predictable Latency Mode: Not Supported 00:15:14.611 Traffic Based Keep ALive: Not Supported 00:15:14.611 Namespace Granularity: Not Supported 00:15:14.611 SQ Associations: Not Supported 00:15:14.611 UUID List: Not Supported 00:15:14.611 Multi-Domain Subsystem: Not Supported 00:15:14.611 Fixed Capacity Management: Not Supported 00:15:14.611 Variable Capacity Management: Not Supported 00:15:14.611 Delete Endurance Group: Not Supported 00:15:14.611 Delete NVM Set: Not Supported 00:15:14.611 Extended LBA Formats Supported: Not Supported 00:15:14.611 Flexible Data Placement Supported: Not Supported 00:15:14.611 00:15:14.611 Controller Memory Buffer Support 00:15:14.611 ================================ 00:15:14.611 Supported: No 00:15:14.611 00:15:14.611 Persistent Memory Region Support 00:15:14.611 ================================ 00:15:14.611 Supported: No 00:15:14.611 00:15:14.611 Admin Command Set Attributes 00:15:14.611 ============================ 00:15:14.611 Security Send/Receive: Not Supported 00:15:14.611 Format NVM: Not Supported 00:15:14.612 Firmware Activate/Download: Not Supported 00:15:14.612 Namespace Management: Not Supported 00:15:14.612 Device Self-Test: Not Supported 00:15:14.612 Directives: Not Supported 00:15:14.612 NVMe-MI: Not Supported 00:15:14.612 Virtualization Management: Not Supported 00:15:14.612 Doorbell Buffer Config: Not Supported 00:15:14.612 Get LBA Status Capability: Not Supported 00:15:14.612 Command & Feature Lockdown Capability: Not Supported 00:15:14.612 Abort Command Limit: 4 00:15:14.612 Async Event Request Limit: 4 00:15:14.612 Number of Firmware Slots: N/A 00:15:14.612 Firmware Slot 1 Read-Only: N/A 00:15:14.612 Firmware Activation Without Reset: N/A 00:15:14.612 Multiple Update Detection Support: N/A 00:15:14.612 Firmware Update Granularity: No Information Provided 00:15:14.612 Per-Namespace SMART Log: No 00:15:14.612 Asymmetric Namespace Access Log Page: Not Supported 00:15:14.612 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:14.612 Command Effects Log Page: Supported 00:15:14.612 Get Log Page Extended Data: Supported 00:15:14.612 Telemetry Log Pages: Not Supported 00:15:14.612 Persistent Event Log Pages: Not Supported 00:15:14.612 Supported Log Pages Log Page: May Support 00:15:14.612 Commands Supported & Effects Log Page: Not Supported 00:15:14.612 Feature Identifiers & Effects Log Page:May Support 00:15:14.612 NVMe-MI Commands & Effects Log Page: May Support 00:15:14.612 Data Area 4 for Telemetry Log: Not Supported 00:15:14.612 Error Log Page Entries Supported: 128 00:15:14.612 Keep Alive: Supported 00:15:14.612 Keep Alive Granularity: 10000 ms 00:15:14.612 00:15:14.612 NVM Command Set Attributes 00:15:14.612 ========================== 00:15:14.612 Submission Queue Entry Size 00:15:14.612 Max: 64 00:15:14.612 Min: 64 00:15:14.612 Completion Queue Entry Size 00:15:14.612 Max: 16 00:15:14.612 Min: 16 00:15:14.612 Number of Namespaces: 32 00:15:14.612 Compare 
Command: Supported 00:15:14.612 Write Uncorrectable Command: Not Supported 00:15:14.612 Dataset Management Command: Supported 00:15:14.612 Write Zeroes Command: Supported 00:15:14.612 Set Features Save Field: Not Supported 00:15:14.612 Reservations: Not Supported 00:15:14.612 Timestamp: Not Supported 00:15:14.612 Copy: Supported 00:15:14.612 Volatile Write Cache: Present 00:15:14.612 Atomic Write Unit (Normal): 1 00:15:14.612 Atomic Write Unit (PFail): 1 00:15:14.612 Atomic Compare & Write Unit: 1 00:15:14.612 Fused Compare & Write: Supported 00:15:14.612 Scatter-Gather List 00:15:14.612 SGL Command Set: Supported (Dword aligned) 00:15:14.612 SGL Keyed: Not Supported 00:15:14.612 SGL Bit Bucket Descriptor: Not Supported 00:15:14.612 SGL Metadata Pointer: Not Supported 00:15:14.612 Oversized SGL: Not Supported 00:15:14.612 SGL Metadata Address: Not Supported 00:15:14.612 SGL Offset: Not Supported 00:15:14.612 Transport SGL Data Block: Not Supported 00:15:14.612 Replay Protected Memory Block: Not Supported 00:15:14.612 00:15:14.612 Firmware Slot Information 00:15:14.612 ========================= 00:15:14.612 Active slot: 1 00:15:14.612 Slot 1 Firmware Revision: 24.05.1 00:15:14.612 00:15:14.612 00:15:14.612 Commands Supported and Effects 00:15:14.612 ============================== 00:15:14.612 Admin Commands 00:15:14.612 -------------- 00:15:14.612 Get Log Page (02h): Supported 00:15:14.612 Identify (06h): Supported 00:15:14.612 Abort (08h): Supported 00:15:14.612 Set Features (09h): Supported 00:15:14.612 Get Features (0Ah): Supported 00:15:14.612 Asynchronous Event Request (0Ch): Supported 00:15:14.612 Keep Alive (18h): Supported 00:15:14.612 I/O Commands 00:15:14.612 ------------ 00:15:14.612 Flush (00h): Supported LBA-Change 00:15:14.612 Write (01h): Supported LBA-Change 00:15:14.612 Read (02h): Supported 00:15:14.612 Compare (05h): Supported 00:15:14.612 Write Zeroes (08h): Supported LBA-Change 00:15:14.612 Dataset Management (09h): Supported LBA-Change 00:15:14.612 Copy (19h): Supported LBA-Change 00:15:14.612 Unknown (79h): Supported LBA-Change 00:15:14.612 Unknown (7Ah): Supported 00:15:14.612 00:15:14.612 Error Log 00:15:14.612 ========= 00:15:14.612 00:15:14.612 Arbitration 00:15:14.612 =========== 00:15:14.612 Arbitration Burst: 1 00:15:14.612 00:15:14.612 Power Management 00:15:14.612 ================ 00:15:14.612 Number of Power States: 1 00:15:14.612 Current Power State: Power State #0 00:15:14.612 Power State #0: 00:15:14.612 Max Power: 0.00 W 00:15:14.612 Non-Operational State: Operational 00:15:14.612 Entry Latency: Not Reported 00:15:14.612 Exit Latency: Not Reported 00:15:14.612 Relative Read Throughput: 0 00:15:14.612 Relative Read Latency: 0 00:15:14.612 Relative Write Throughput: 0 00:15:14.612 Relative Write Latency: 0 00:15:14.612 Idle Power: Not Reported 00:15:14.612 Active Power: Not Reported 00:15:14.612 Non-Operational Permissive Mode: Not Supported 00:15:14.612 00:15:14.612 Health Information 00:15:14.612 ================== 00:15:14.612 Critical Warnings: 00:15:14.612 Available Spare Space: OK 00:15:14.612 Temperature: OK 00:15:14.612 Device Reliability: OK 00:15:14.612 Read Only: No 00:15:14.612 Volatile Memory Backup: OK 00:15:14.612 Current Temperature: 0 Kelvin[2024-07-23 03:14:40.954711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:14.612 [2024-07-23 03:14:40.954729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:15:14.612 [2024-07-23 03:14:40.954767] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:14.612 [2024-07-23 03:14:40.954785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.612 [2024-07-23 03:14:40.954797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.612 [2024-07-23 03:14:40.954807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.612 [2024-07-23 03:14:40.954816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.612 [2024-07-23 03:14:40.955303] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:14.612 [2024-07-23 03:14:40.955324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:14.612 [2024-07-23 03:14:40.956298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.612 [2024-07-23 03:14:40.956369] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:14.612 [2024-07-23 03:14:40.956383] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:14.612 [2024-07-23 03:14:40.957311] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:14.612 [2024-07-23 03:14:40.957333] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:14.612 [2024-07-23 03:14:40.957389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:14.612 [2024-07-23 03:14:40.959349] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.612 (-273 Celsius) 00:15:14.612 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:14.612 Available Spare: 0% 00:15:14.612 Available Spare Threshold: 0% 00:15:14.612 Life Percentage Used: 0% 00:15:14.612 Data Units Read: 0 00:15:14.612 Data Units Written: 0 00:15:14.612 Host Read Commands: 0 00:15:14.612 Host Write Commands: 0 00:15:14.612 Controller Busy Time: 0 minutes 00:15:14.612 Power Cycles: 0 00:15:14.612 Power On Hours: 0 hours 00:15:14.612 Unsafe Shutdowns: 0 00:15:14.612 Unrecoverable Media Errors: 0 00:15:14.612 Lifetime Error Log Entries: 0 00:15:14.612 Warning Temperature Time: 0 minutes 00:15:14.612 Critical Temperature Time: 0 minutes 00:15:14.612 00:15:14.612 Number of Queues 00:15:14.612 ================ 00:15:14.612 Number of I/O Submission Queues: 127 00:15:14.612 Number of I/O Completion Queues: 127 00:15:14.612 00:15:14.612 Active Namespaces 00:15:14.612 ================= 00:15:14.612 Namespace ID:1 00:15:14.612 Error Recovery Timeout: Unlimited 00:15:14.612 Command Set Identifier: NVM (00h) 00:15:14.612 Deallocate: Supported 00:15:14.612 Deallocated/Unwritten Error: Not Supported 00:15:14.612 Deallocated Read Value: Unknown 00:15:14.612 
Deallocate in Write Zeroes: Not Supported 00:15:14.612 Deallocated Guard Field: 0xFFFF 00:15:14.612 Flush: Supported 00:15:14.612 Reservation: Supported 00:15:14.612 Namespace Sharing Capabilities: Multiple Controllers 00:15:14.612 Size (in LBAs): 131072 (0GiB) 00:15:14.612 Capacity (in LBAs): 131072 (0GiB) 00:15:14.612 Utilization (in LBAs): 131072 (0GiB) 00:15:14.612 NGUID: C346F5D968CD45C49445E9C7D80BC7D0 00:15:14.612 UUID: c346f5d9-68cd-45c4-9445-e9c7d80bc7d0 00:15:14.612 Thin Provisioning: Not Supported 00:15:14.612 Per-NS Atomic Units: Yes 00:15:14.612 Atomic Boundary Size (Normal): 0 00:15:14.612 Atomic Boundary Size (PFail): 0 00:15:14.612 Atomic Boundary Offset: 0 00:15:14.612 Maximum Single Source Range Length: 65535 00:15:14.612 Maximum Copy Length: 65535 00:15:14.612 Maximum Source Range Count: 1 00:15:14.612 NGUID/EUI64 Never Reused: No 00:15:14.612 Namespace Write Protected: No 00:15:14.612 Number of LBA Formats: 1 00:15:14.612 Current LBA Format: LBA Format #00 00:15:14.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:14.612 00:15:14.612 03:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.612 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.872 [2024-07-23 03:14:41.187425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.163 Initializing NVMe Controllers 00:15:20.163 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:20.163 Initialization complete. Launching workers. 00:15:20.163 ======================================================== 00:15:20.163 Latency(us) 00:15:20.163 Device Information : IOPS MiB/s Average min max 00:15:20.163 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35615.21 139.12 3593.29 1155.42 8578.52 00:15:20.163 ======================================================== 00:15:20.163 Total : 35615.21 139.12 3593.29 1155.42 8578.52 00:15:20.163 00:15:20.163 [2024-07-23 03:14:46.210143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.164 03:14:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:20.164 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.164 [2024-07-23 03:14:46.451303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.424 Initializing NVMe Controllers 00:15:25.424 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.424 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:25.424 Initialization complete. Launching workers. 
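A minimal sketch, assuming the vfio-user target from this job were still listening at /var/run/vfio-user/domain/vfio-user1/1, of how the two spdk_nvme_perf passes above could be re-run by hand; the flags and paths are copied from the invocations logged above, nothing else is implied:

  # read pass (flags as logged: -q 128 queue depth, -o 4096-byte I/O, -w read, -t 5 s, -c 0x2 core mask)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

  # write pass: identical parameters except the workload type
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

As a quick consistency check on the read table above: 35615.21 IOPS at 4096 bytes per I/O is roughly 139.12 MiB/s, which matches the MiB/s column reported there.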
00:15:25.424 ======================================================== 00:15:25.424 Latency(us) 00:15:25.424 Device Information : IOPS MiB/s Average min max 00:15:25.424 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.17 62.70 7982.87 6944.00 11968.41 00:15:25.424 ======================================================== 00:15:25.424 Total : 16051.17 62.70 7982.87 6944.00 11968.41 00:15:25.424 00:15:25.424 [2024-07-23 03:14:51.486670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.424 03:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:25.424 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.424 [2024-07-23 03:14:51.699763] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:30.686 [2024-07-23 03:14:56.779974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:30.686 Initializing NVMe Controllers 00:15:30.686 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.686 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:30.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:30.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:30.686 Initialization complete. Launching workers. 00:15:30.686 Starting thread on core 2 00:15:30.686 Starting thread on core 3 00:15:30.686 Starting thread on core 1 00:15:30.686 03:14:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:30.686 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.686 [2024-07-23 03:14:57.080107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.969 [2024-07-23 03:15:00.146794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.969 Initializing NVMe Controllers 00:15:33.969 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.969 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.969 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:33.969 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:33.969 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:33.969 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:33.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:33.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:33.969 Initialization complete. Launching workers. 
00:15:33.969 Starting thread on core 1 with urgent priority queue 00:15:33.969 Starting thread on core 2 with urgent priority queue 00:15:33.969 Starting thread on core 3 with urgent priority queue 00:15:33.969 Starting thread on core 0 with urgent priority queue 00:15:33.969 SPDK bdev Controller (SPDK1 ) core 0: 5793.00 IO/s 17.26 secs/100000 ios 00:15:33.969 SPDK bdev Controller (SPDK1 ) core 1: 5341.33 IO/s 18.72 secs/100000 ios 00:15:33.969 SPDK bdev Controller (SPDK1 ) core 2: 5591.33 IO/s 17.88 secs/100000 ios 00:15:33.969 SPDK bdev Controller (SPDK1 ) core 3: 4660.00 IO/s 21.46 secs/100000 ios 00:15:33.969 ======================================================== 00:15:33.969 00:15:33.969 03:15:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:33.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.969 [2024-07-23 03:15:00.445112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.969 Initializing NVMe Controllers 00:15:33.969 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.969 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.969 Namespace ID: 1 size: 0GB 00:15:33.969 Initialization complete. 00:15:33.969 INFO: using host memory buffer for IO 00:15:33.969 Hello world! 00:15:33.969 [2024-07-23 03:15:00.480751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.969 03:15:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:34.227 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.227 [2024-07-23 03:15:00.771506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.600 Initializing NVMe Controllers 00:15:35.600 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.600 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.600 Initialization complete. Launching workers. 
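Reading the arbitration table just above: the IO/s and secs/100000 ios columns are two views of the same measurement, e.g. core 0 at 5793.00 IO/s needs 100000 / 5793.00 ≈ 17.26 s to complete 100000 I/Os, and core 1 at 5341.33 IO/s needs ≈ 18.72 s, matching the logged figures.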
00:15:35.600 submit (in ns) avg, min, max = 7840.3, 3480.0, 4022257.8 00:15:35.600 complete (in ns) avg, min, max = 24994.5, 2058.9, 6991697.8 00:15:35.600 00:15:35.600 Submit histogram 00:15:35.600 ================ 00:15:35.600 Range in us Cumulative Count 00:15:35.600 3.461 - 3.484: 0.0074% ( 1) 00:15:35.600 3.484 - 3.508: 0.4527% ( 60) 00:15:35.600 3.508 - 3.532: 1.9887% ( 207) 00:15:35.600 3.532 - 3.556: 5.1351% ( 424) 00:15:35.601 3.556 - 3.579: 11.5687% ( 867) 00:15:35.601 3.579 - 3.603: 22.5438% ( 1479) 00:15:35.601 3.603 - 3.627: 34.1941% ( 1570) 00:15:35.601 3.627 - 3.650: 42.6536% ( 1140) 00:15:35.601 3.650 - 3.674: 48.2636% ( 756) 00:15:35.601 3.674 - 3.698: 53.8884% ( 758) 00:15:35.601 3.698 - 3.721: 59.4687% ( 752) 00:15:35.601 3.721 - 3.745: 63.8246% ( 587) 00:15:35.601 3.745 - 3.769: 66.6815% ( 385) 00:15:35.601 3.769 - 3.793: 69.1822% ( 337) 00:15:35.601 3.793 - 3.816: 72.2247% ( 410) 00:15:35.601 3.816 - 3.840: 76.1428% ( 528) 00:15:35.601 3.840 - 3.864: 80.1499% ( 540) 00:15:35.601 3.864 - 3.887: 83.6821% ( 476) 00:15:35.601 3.887 - 3.911: 85.9751% ( 309) 00:15:35.601 3.911 - 3.935: 87.8599% ( 254) 00:15:35.601 3.935 - 3.959: 89.4182% ( 210) 00:15:35.601 3.959 - 3.982: 90.8727% ( 196) 00:15:35.601 3.982 - 4.006: 91.8670% ( 134) 00:15:35.601 4.006 - 4.030: 92.6610% ( 107) 00:15:35.601 4.030 - 4.053: 93.4031% ( 100) 00:15:35.601 4.053 - 4.077: 94.0116% ( 82) 00:15:35.601 4.077 - 4.101: 94.6794% ( 90) 00:15:35.601 4.101 - 4.124: 95.1618% ( 65) 00:15:35.601 4.124 - 4.148: 95.4066% ( 33) 00:15:35.601 4.148 - 4.172: 95.5847% ( 24) 00:15:35.601 4.172 - 4.196: 95.7109% ( 17) 00:15:35.601 4.196 - 4.219: 95.7925% ( 11) 00:15:35.601 4.219 - 4.243: 95.9558% ( 22) 00:15:35.601 4.243 - 4.267: 96.0968% ( 19) 00:15:35.601 4.267 - 4.290: 96.1636% ( 9) 00:15:35.601 4.290 - 4.314: 96.2155% ( 7) 00:15:35.601 4.314 - 4.338: 96.2897% ( 10) 00:15:35.601 4.338 - 4.361: 96.3639% ( 10) 00:15:35.601 4.361 - 4.385: 96.4084% ( 6) 00:15:35.601 4.385 - 4.409: 96.4159% ( 1) 00:15:35.601 4.409 - 4.433: 96.4307% ( 2) 00:15:35.601 4.456 - 4.480: 96.4381% ( 1) 00:15:35.601 4.480 - 4.504: 96.4530% ( 2) 00:15:35.601 4.504 - 4.527: 96.4678% ( 2) 00:15:35.601 4.527 - 4.551: 96.4826% ( 2) 00:15:35.601 4.551 - 4.575: 96.4901% ( 1) 00:15:35.601 4.575 - 4.599: 96.5049% ( 2) 00:15:35.601 4.599 - 4.622: 96.5346% ( 4) 00:15:35.601 4.622 - 4.646: 96.5568% ( 3) 00:15:35.601 4.646 - 4.670: 96.6236% ( 9) 00:15:35.601 4.670 - 4.693: 96.6978% ( 10) 00:15:35.601 4.693 - 4.717: 96.7349% ( 5) 00:15:35.601 4.717 - 4.741: 96.7869% ( 7) 00:15:35.601 4.741 - 4.764: 96.8611% ( 10) 00:15:35.601 4.764 - 4.788: 96.8833% ( 3) 00:15:35.601 4.788 - 4.812: 96.9279% ( 6) 00:15:35.601 4.812 - 4.836: 96.9576% ( 4) 00:15:35.601 4.836 - 4.859: 96.9724% ( 2) 00:15:35.601 4.859 - 4.883: 96.9947% ( 3) 00:15:35.601 4.883 - 4.907: 97.0243% ( 4) 00:15:35.601 4.907 - 4.930: 97.0466% ( 3) 00:15:35.601 4.930 - 4.954: 97.0763% ( 4) 00:15:35.601 4.954 - 4.978: 97.0985% ( 3) 00:15:35.601 4.978 - 5.001: 97.1134% ( 2) 00:15:35.601 5.001 - 5.025: 97.1282% ( 2) 00:15:35.601 5.025 - 5.049: 97.1431% ( 2) 00:15:35.601 5.049 - 5.073: 97.1653% ( 3) 00:15:35.601 5.073 - 5.096: 97.1876% ( 3) 00:15:35.601 5.096 - 5.120: 97.2173% ( 4) 00:15:35.601 5.120 - 5.144: 97.2395% ( 3) 00:15:35.601 5.167 - 5.191: 97.2470% ( 1) 00:15:35.601 5.191 - 5.215: 97.2618% ( 2) 00:15:35.601 5.215 - 5.239: 97.2766% ( 2) 00:15:35.601 5.239 - 5.262: 97.2841% ( 1) 00:15:35.601 5.333 - 5.357: 97.2915% ( 1) 00:15:35.601 5.357 - 5.381: 97.2989% ( 1) 00:15:35.601 5.381 - 5.404: 97.3137% ( 2) 
00:15:35.601 5.404 - 5.428: 97.3212% ( 1) 00:15:35.601 5.452 - 5.476: 97.3360% ( 2) 00:15:35.601 5.570 - 5.594: 97.3434% ( 1) 00:15:35.601 5.594 - 5.618: 97.3508% ( 1) 00:15:35.601 5.641 - 5.665: 97.3583% ( 1) 00:15:35.601 5.784 - 5.807: 97.3657% ( 1) 00:15:35.601 5.855 - 5.879: 97.3731% ( 1) 00:15:35.601 5.902 - 5.926: 97.3805% ( 1) 00:15:35.601 5.950 - 5.973: 97.3879% ( 1) 00:15:35.601 6.068 - 6.116: 97.4028% ( 2) 00:15:35.601 6.258 - 6.305: 97.4176% ( 2) 00:15:35.601 6.353 - 6.400: 97.4251% ( 1) 00:15:35.601 6.400 - 6.447: 97.4325% ( 1) 00:15:35.601 6.447 - 6.495: 97.4473% ( 2) 00:15:35.601 6.684 - 6.732: 97.4547% ( 1) 00:15:35.601 6.779 - 6.827: 97.4696% ( 2) 00:15:35.601 6.827 - 6.874: 97.4844% ( 2) 00:15:35.601 6.874 - 6.921: 97.4918% ( 1) 00:15:35.601 6.921 - 6.969: 97.4993% ( 1) 00:15:35.601 7.111 - 7.159: 97.5141% ( 2) 00:15:35.601 7.206 - 7.253: 97.5289% ( 2) 00:15:35.601 7.253 - 7.301: 97.5364% ( 1) 00:15:35.601 7.301 - 7.348: 97.5660% ( 4) 00:15:35.601 7.348 - 7.396: 97.6106% ( 6) 00:15:35.601 7.396 - 7.443: 97.6254% ( 2) 00:15:35.601 7.443 - 7.490: 97.6699% ( 6) 00:15:35.601 7.490 - 7.538: 97.6774% ( 1) 00:15:35.601 7.538 - 7.585: 97.7070% ( 4) 00:15:35.601 7.633 - 7.680: 97.7145% ( 1) 00:15:35.601 7.680 - 7.727: 97.7441% ( 4) 00:15:35.601 7.727 - 7.775: 97.7590% ( 2) 00:15:35.601 7.775 - 7.822: 97.7664% ( 1) 00:15:35.601 7.822 - 7.870: 97.7738% ( 1) 00:15:35.601 7.870 - 7.917: 97.7812% ( 1) 00:15:35.601 7.964 - 8.012: 97.7887% ( 1) 00:15:35.601 8.012 - 8.059: 97.7961% ( 1) 00:15:35.601 8.059 - 8.107: 97.8109% ( 2) 00:15:35.601 8.107 - 8.154: 97.8258% ( 2) 00:15:35.601 8.154 - 8.201: 97.8629% ( 5) 00:15:35.601 8.201 - 8.249: 97.8703% ( 1) 00:15:35.601 8.249 - 8.296: 97.8777% ( 1) 00:15:35.601 8.344 - 8.391: 97.8925% ( 2) 00:15:35.601 8.439 - 8.486: 97.9000% ( 1) 00:15:35.601 8.533 - 8.581: 97.9074% ( 1) 00:15:35.601 8.581 - 8.628: 97.9148% ( 1) 00:15:35.601 8.676 - 8.723: 97.9222% ( 1) 00:15:35.601 8.723 - 8.770: 97.9297% ( 1) 00:15:35.601 8.770 - 8.818: 97.9371% ( 1) 00:15:35.601 8.818 - 8.865: 97.9445% ( 1) 00:15:35.601 8.865 - 8.913: 97.9668% ( 3) 00:15:35.601 8.960 - 9.007: 97.9890% ( 3) 00:15:35.601 9.007 - 9.055: 98.0039% ( 2) 00:15:35.601 9.055 - 9.102: 98.0113% ( 1) 00:15:35.601 9.102 - 9.150: 98.0261% ( 2) 00:15:35.601 9.150 - 9.197: 98.0335% ( 1) 00:15:35.601 9.197 - 9.244: 98.0410% ( 1) 00:15:35.601 9.244 - 9.292: 98.0484% ( 1) 00:15:35.601 9.339 - 9.387: 98.0706% ( 3) 00:15:35.601 9.387 - 9.434: 98.0855% ( 2) 00:15:35.601 9.434 - 9.481: 98.0929% ( 1) 00:15:35.601 9.481 - 9.529: 98.1152% ( 3) 00:15:35.601 9.529 - 9.576: 98.1300% ( 2) 00:15:35.601 9.671 - 9.719: 98.1449% ( 2) 00:15:35.601 9.719 - 9.766: 98.1523% ( 1) 00:15:35.601 9.766 - 9.813: 98.1745% ( 3) 00:15:35.601 9.861 - 9.908: 98.1894% ( 2) 00:15:35.601 9.956 - 10.003: 98.1968% ( 1) 00:15:35.601 10.003 - 10.050: 98.2042% ( 1) 00:15:35.601 10.098 - 10.145: 98.2191% ( 2) 00:15:35.601 10.145 - 10.193: 98.2413% ( 3) 00:15:35.601 10.193 - 10.240: 98.2487% ( 1) 00:15:35.601 10.240 - 10.287: 98.2562% ( 1) 00:15:35.601 10.287 - 10.335: 98.2636% ( 1) 00:15:35.601 10.335 - 10.382: 98.2784% ( 2) 00:15:35.601 10.430 - 10.477: 98.2858% ( 1) 00:15:35.601 10.477 - 10.524: 98.3007% ( 2) 00:15:35.601 10.524 - 10.572: 98.3155% ( 2) 00:15:35.601 10.619 - 10.667: 98.3378% ( 3) 00:15:35.601 10.667 - 10.714: 98.3452% ( 1) 00:15:35.601 10.809 - 10.856: 98.3675% ( 3) 00:15:35.601 10.904 - 10.951: 98.3749% ( 1) 00:15:35.601 10.951 - 10.999: 98.3823% ( 1) 00:15:35.601 10.999 - 11.046: 98.3972% ( 2) 00:15:35.601 11.046 - 11.093: 
98.4120% ( 2) 00:15:35.601 11.093 - 11.141: 98.4268% ( 2) 00:15:35.601 11.141 - 11.188: 98.4343% ( 1) 00:15:35.601 11.188 - 11.236: 98.4417% ( 1) 00:15:35.601 11.378 - 11.425: 98.4639% ( 3) 00:15:35.601 11.425 - 11.473: 98.4714% ( 1) 00:15:35.601 11.473 - 11.520: 98.4862% ( 2) 00:15:35.601 11.520 - 11.567: 98.4936% ( 1) 00:15:35.601 11.567 - 11.615: 98.5085% ( 2) 00:15:35.601 11.615 - 11.662: 98.5159% ( 1) 00:15:35.601 11.662 - 11.710: 98.5307% ( 2) 00:15:35.601 11.710 - 11.757: 98.5456% ( 2) 00:15:35.601 11.804 - 11.852: 98.5604% ( 2) 00:15:35.601 11.947 - 11.994: 98.5752% ( 2) 00:15:35.601 11.994 - 12.041: 98.5827% ( 1) 00:15:35.601 12.041 - 12.089: 98.5901% ( 1) 00:15:35.601 12.089 - 12.136: 98.5975% ( 1) 00:15:35.601 12.136 - 12.231: 98.6049% ( 1) 00:15:35.601 12.231 - 12.326: 98.6272% ( 3) 00:15:35.601 12.326 - 12.421: 98.6420% ( 2) 00:15:35.601 12.421 - 12.516: 98.6495% ( 1) 00:15:35.601 12.516 - 12.610: 98.6643% ( 2) 00:15:35.601 12.705 - 12.800: 98.6717% ( 1) 00:15:35.601 12.800 - 12.895: 98.6791% ( 1) 00:15:35.601 12.895 - 12.990: 98.6866% ( 1) 00:15:35.601 12.990 - 13.084: 98.6940% ( 1) 00:15:35.601 13.084 - 13.179: 98.7237% ( 4) 00:15:35.601 13.179 - 13.274: 98.7311% ( 1) 00:15:35.601 13.464 - 13.559: 98.7608% ( 4) 00:15:35.602 13.559 - 13.653: 98.7682% ( 1) 00:15:35.602 13.653 - 13.748: 98.7979% ( 4) 00:15:35.602 13.748 - 13.843: 98.8053% ( 1) 00:15:35.602 13.843 - 13.938: 98.8127% ( 1) 00:15:35.602 13.938 - 14.033: 98.8350% ( 3) 00:15:35.602 14.033 - 14.127: 98.8424% ( 1) 00:15:35.602 14.222 - 14.317: 98.8498% ( 1) 00:15:35.602 14.412 - 14.507: 98.8721% ( 3) 00:15:35.602 14.507 - 14.601: 98.8943% ( 3) 00:15:35.602 14.696 - 14.791: 98.9018% ( 1) 00:15:35.602 14.791 - 14.886: 98.9240% ( 3) 00:15:35.602 15.265 - 15.360: 98.9314% ( 1) 00:15:35.602 15.455 - 15.550: 98.9389% ( 1) 00:15:35.602 15.929 - 16.024: 98.9463% ( 1) 00:15:35.602 17.067 - 17.161: 98.9611% ( 2) 00:15:35.602 17.256 - 17.351: 98.9685% ( 1) 00:15:35.602 17.351 - 17.446: 98.9982% ( 4) 00:15:35.602 17.446 - 17.541: 99.0279% ( 4) 00:15:35.602 17.541 - 17.636: 99.0427% ( 2) 00:15:35.602 17.636 - 17.730: 99.0650% ( 3) 00:15:35.602 17.730 - 17.825: 99.0947% ( 4) 00:15:35.602 17.825 - 17.920: 99.1169% ( 3) 00:15:35.602 17.920 - 18.015: 99.1615% ( 6) 00:15:35.602 18.015 - 18.110: 99.2208% ( 8) 00:15:35.602 18.110 - 18.204: 99.2802% ( 8) 00:15:35.602 18.204 - 18.299: 99.3470% ( 9) 00:15:35.602 18.299 - 18.394: 99.3989% ( 7) 00:15:35.602 18.394 - 18.489: 99.4731% ( 10) 00:15:35.602 18.489 - 18.584: 99.5325% ( 8) 00:15:35.602 18.584 - 18.679: 99.5844% ( 7) 00:15:35.602 18.679 - 18.773: 99.6141% ( 4) 00:15:35.602 18.773 - 18.868: 99.6290% ( 2) 00:15:35.602 18.868 - 18.963: 99.6438% ( 2) 00:15:35.602 18.963 - 19.058: 99.6512% ( 1) 00:15:35.602 19.058 - 19.153: 99.6587% ( 1) 00:15:35.602 19.153 - 19.247: 99.6883% ( 4) 00:15:35.602 19.342 - 19.437: 99.6958% ( 1) 00:15:35.602 19.437 - 19.532: 99.7032% ( 1) 00:15:35.602 19.532 - 19.627: 99.7180% ( 2) 00:15:35.602 19.627 - 19.721: 99.7254% ( 1) 00:15:35.602 19.816 - 19.911: 99.7329% ( 1) 00:15:35.602 19.911 - 20.006: 99.7403% ( 1) 00:15:35.602 20.385 - 20.480: 99.7477% ( 1) 00:15:35.602 20.575 - 20.670: 99.7551% ( 1) 00:15:35.602 21.333 - 21.428: 99.7625% ( 1) 00:15:35.602 21.428 - 21.523: 99.7700% ( 1) 00:15:35.602 21.713 - 21.807: 99.7774% ( 1) 00:15:35.602 22.566 - 22.661: 99.7922% ( 2) 00:15:35.602 23.135 - 23.230: 99.7996% ( 1) 00:15:35.602 24.083 - 24.178: 99.8071% ( 1) 00:15:35.602 24.178 - 24.273: 99.8145% ( 1) 00:15:35.602 24.841 - 25.031: 99.8293% ( 2) 00:15:35.602 26.169 
- 26.359: 99.8367% ( 1) 00:15:35.602 26.359 - 26.548: 99.8442% ( 1) 00:15:35.602 26.738 - 26.927: 99.8590% ( 2) 00:15:35.602 26.927 - 27.117: 99.8664% ( 1) 00:15:35.602 28.255 - 28.444: 99.8738% ( 1) 00:15:35.602 29.203 - 29.393: 99.8813% ( 1) 00:15:35.602 30.341 - 30.530: 99.8887% ( 1) 00:15:35.602 30.530 - 30.720: 99.8961% ( 1) 00:15:35.602 31.668 - 31.858: 99.9035% ( 1) 00:15:35.602 3980.705 - 4004.978: 99.9703% ( 9) 00:15:35.602 4004.978 - 4029.250: 100.0000% ( 4) 00:15:35.602 00:15:35.602 Complete histogram 00:15:35.602 ================== 00:15:35.602 Range in us Cumulative Count 00:15:35.602 2.050 - 2.062: 0.0668% ( 9) 00:15:35.602 2.062 - 2.074: 24.1689% ( 3248) 00:15:35.602 2.074 - 2.086: 37.6521% ( 1817) 00:15:35.602 2.086 - 2.098: 40.3977% ( 370) 00:15:35.602 2.098 - 2.110: 56.2185% ( 2132) 00:15:35.602 2.110 - 2.121: 60.6560% ( 598) 00:15:35.602 2.121 - 2.133: 63.9730% ( 447) 00:15:35.602 2.133 - 2.145: 74.0279% ( 1355) 00:15:35.602 2.145 - 2.157: 76.5361% ( 338) 00:15:35.602 2.157 - 2.169: 78.4209% ( 254) 00:15:35.602 2.169 - 2.181: 82.6952% ( 576) 00:15:35.602 2.181 - 2.193: 83.8157% ( 151) 00:15:35.602 2.193 - 2.204: 84.8026% ( 133) 00:15:35.602 2.204 - 2.216: 88.1196% ( 447) 00:15:35.602 2.216 - 2.228: 90.4942% ( 320) 00:15:35.602 2.228 - 2.240: 92.0896% ( 215) 00:15:35.602 2.240 - 2.252: 93.7890% ( 229) 00:15:35.602 2.252 - 2.264: 94.2416% ( 61) 00:15:35.602 2.264 - 2.276: 94.5236% ( 38) 00:15:35.602 2.276 - 2.287: 94.8575% ( 45) 00:15:35.602 2.287 - 2.299: 95.4363% ( 78) 00:15:35.602 2.299 - 2.311: 95.8445% ( 55) 00:15:35.602 2.311 - 2.323: 96.0077% ( 22) 00:15:35.602 2.323 - 2.335: 96.0448% ( 5) 00:15:35.602 2.335 - 2.347: 96.1339% ( 12) 00:15:35.602 2.347 - 2.359: 96.3342% ( 27) 00:15:35.602 2.359 - 2.370: 96.6459% ( 42) 00:15:35.602 2.370 - 2.382: 97.0095% ( 49) 00:15:35.602 2.382 - 2.394: 97.4028% ( 53) 00:15:35.602 2.394 - 2.406: 97.6699% ( 36) 00:15:35.602 2.406 - 2.418: 97.8109% ( 19) 00:15:35.602 2.418 - 2.430: 97.8851% ( 10) 00:15:35.602 2.430 - 2.441: 98.0410% ( 21) 00:15:35.602 2.441 - 2.453: 98.1597% ( 16) 00:15:35.602 2.453 - 2.465: 98.2636% ( 14) 00:15:35.602 2.465 - 2.477: 98.3229% ( 8) 00:15:35.602 2.477 - 2.489: 98.3600% ( 5) 00:15:35.602 2.489 - 2.501: 98.3823% ( 3) 00:15:35.602 2.501 - 2.513: 98.3972% ( 2) 00:15:35.602 2.513 - 2.524: 98.4046% ( 1) 00:15:35.602 2.524 - 2.536: 98.4120% ( 1) 00:15:35.602 2.536 - 2.548: 98.4194% ( 1) 00:15:35.602 2.548 - 2.560: 98.4268% ( 1) 00:15:35.602 2.560 - 2.572: 98.4343% ( 1) 00:15:35.602 2.572 - 2.584: 98.4491% ( 2) 00:15:35.602 2.619 - 2.631: 98.4639% ( 2) 00:15:35.602 2.631 - 2.643: 98.4788% ( 2) 00:15:35.602 2.667 - 2.679: 98.4862% ( 1) 00:15:35.602 2.714 - 2.726: 98.4936% ( 1) 00:15:35.602 2.738 - 2.750: 98.5010% ( 1) 00:15:35.602 2.809 - 2.821: 98.5085% ( 1) 00:15:35.602 3.319 - 3.342: 98.5159% ( 1) 00:15:35.602 3.342 - 3.366: 98.5307% ( 2) 00:15:35.602 3.366 - 3.390: 98.5381% ( 1) 00:15:35.602 3.390 - 3.413: 98.5530% ( 2) 00:15:35.602 3.413 - 3.437: 98.5604% ( 1) 00:15:35.602 3.437 - 3.461: 98.5752% ( 2) 00:15:35.602 3.461 - 3.484: 98.5827% ( 1) 00:15:35.602 3.484 - 3.508: 98.6049% ( 3) 00:15:35.602 3.508 - 3.532: 98.6123% ( 1) 00:15:35.602 3.556 - 3.579: 98.6272% ( 2) 00:15:35.602 3.627 - 3.650: 98.6420% ( 2) 00:15:35.602 3.650 - 3.674: 98.6643% ( 3) 00:15:35.602 3.674 - 3.698: 98.6866% ( 3) 00:15:35.602 3.698 - 3.721: 98.6940% ( 1) 00:15:35.602 3.721 - 3.745: 98.7014% ( 1) 00:15:35.602 3.745 - 3.769: 98.7162% ( 2) 00:15:35.602 3.793 - 3.816: 98.7311% ( 2) 00:15:35.602 3.887 - 3.911: 98.7459% ( 2) 00:15:35.602 
3.935 - 3.959: 98.7533% ( 1) 00:15:35.602 5.073 - 5.096: 98.7608% ( 1) 00:15:35.602 5.879 - 5.902: 98.7682% ( 1) 00:15:35.602 5.926 - 5.950: 98.7756% ( 1) 00:15:35.602 5.973 - 5.997: 98.7830% ( 1) 00:15:35.602 6.353 - 6.400: 98.7904% ( 1) 00:15:35.602 6.827 - 6.874: 98.7979% ( 1) 00:15:35.602 6.921 - 6.969: 98.8053% ( 1) 00:15:35.602 7.111 - 7.159: 98.8127% ( 1) 00:15:35.602 7.585 - 7.633: 98.8201% ( 1) 00:15:35.602 7.775 - 7.822: 98.8275% ( 1) 00:15:35.602 8.154 - 8.201: 98.8350% ( 1) 00:15:35.602 8.201 - 8.249: 98.8424% ( 1) 00:15:35.602 9.908 - 9.956: 98.8498% ( 1) 00:15:35.602 11.662 - 11.710: 98.8572% ( 1) 00:15:35.602 13.843 - 13.938: 98.8646% ( 1) 00:15:35.602 14.601 - 14.696: 98.8721% ( 1) 00:15:35.602 15.550 - 15.644: 98.8869%[2024-07-23 03:15:01.792784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.602 ( 2) 00:15:35.602 15.644 - 15.739: 98.9018% ( 2) 00:15:35.602 15.739 - 15.834: 98.9240% ( 3) 00:15:35.602 15.834 - 15.929: 98.9537% ( 4) 00:15:35.602 15.929 - 16.024: 98.9834% ( 4) 00:15:35.602 16.024 - 16.119: 99.0131% ( 4) 00:15:35.602 16.119 - 16.213: 99.0427% ( 4) 00:15:35.602 16.213 - 16.308: 99.0650% ( 3) 00:15:35.602 16.308 - 16.403: 99.0724% ( 1) 00:15:35.602 16.403 - 16.498: 99.0947% ( 3) 00:15:35.602 16.498 - 16.593: 99.1541% ( 8) 00:15:35.602 16.593 - 16.687: 99.1837% ( 4) 00:15:35.602 16.687 - 16.782: 99.2283% ( 6) 00:15:35.602 16.782 - 16.877: 99.2505% ( 3) 00:15:35.602 16.877 - 16.972: 99.2728% ( 3) 00:15:35.602 16.972 - 17.067: 99.3025% ( 4) 00:15:35.602 17.067 - 17.161: 99.3396% ( 5) 00:15:35.602 17.161 - 17.256: 99.3544% ( 2) 00:15:35.602 17.256 - 17.351: 99.3618% ( 1) 00:15:35.602 17.351 - 17.446: 99.3692% ( 1) 00:15:35.602 17.446 - 17.541: 99.3767% ( 1) 00:15:35.602 17.541 - 17.636: 99.3841% ( 1) 00:15:35.602 17.730 - 17.825: 99.3915% ( 1) 00:15:35.602 17.920 - 18.015: 99.3989% ( 1) 00:15:35.602 19.816 - 19.911: 99.4064% ( 1) 00:15:35.602 19.911 - 20.006: 99.4138% ( 1) 00:15:35.602 20.764 - 20.859: 99.4212% ( 1) 00:15:35.602 1049.790 - 1055.858: 99.4286% ( 1) 00:15:35.602 2087.443 - 2099.579: 99.4360% ( 1) 00:15:35.602 2281.624 - 2293.760: 99.4435% ( 1) 00:15:35.602 3009.801 - 3021.938: 99.4509% ( 1) 00:15:35.603 3021.938 - 3034.074: 99.4583% ( 1) 00:15:35.603 3543.799 - 3568.071: 99.4657% ( 1) 00:15:35.603 3980.705 - 4004.978: 99.9184% ( 61) 00:15:35.603 4004.978 - 4029.250: 99.9852% ( 9) 00:15:35.603 4975.881 - 5000.154: 99.9926% ( 1) 00:15:35.603 6990.507 - 7039.052: 100.0000% ( 1) 00:15:35.603 00:15:35.603 03:15:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:35.603 03:15:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:35.603 03:15:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:35.603 03:15:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:35.603 03:15:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.603 [ 00:15:35.603 { 00:15:35.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.603 "subtype": "Discovery", 00:15:35.603 "listen_addresses": [], 00:15:35.603 "allow_any_host": true, 00:15:35.603 "hosts": [] 00:15:35.603 }, 00:15:35.603 { 00:15:35.603 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.603 
"subtype": "NVMe", 00:15:35.603 "listen_addresses": [ 00:15:35.603 { 00:15:35.603 "trtype": "VFIOUSER", 00:15:35.603 "adrfam": "IPv4", 00:15:35.603 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.603 "trsvcid": "0" 00:15:35.603 } 00:15:35.603 ], 00:15:35.603 "allow_any_host": true, 00:15:35.603 "hosts": [], 00:15:35.603 "serial_number": "SPDK1", 00:15:35.603 "model_number": "SPDK bdev Controller", 00:15:35.603 "max_namespaces": 32, 00:15:35.603 "min_cntlid": 1, 00:15:35.603 "max_cntlid": 65519, 00:15:35.603 "namespaces": [ 00:15:35.603 { 00:15:35.603 "nsid": 1, 00:15:35.603 "bdev_name": "Malloc1", 00:15:35.603 "name": "Malloc1", 00:15:35.603 "nguid": "C346F5D968CD45C49445E9C7D80BC7D0", 00:15:35.603 "uuid": "c346f5d9-68cd-45c4-9445-e9c7d80bc7d0" 00:15:35.603 } 00:15:35.603 ] 00:15:35.603 }, 00:15:35.603 { 00:15:35.603 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.603 "subtype": "NVMe", 00:15:35.603 "listen_addresses": [ 00:15:35.603 { 00:15:35.603 "trtype": "VFIOUSER", 00:15:35.603 "adrfam": "IPv4", 00:15:35.603 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.603 "trsvcid": "0" 00:15:35.603 } 00:15:35.603 ], 00:15:35.603 "allow_any_host": true, 00:15:35.603 "hosts": [], 00:15:35.603 "serial_number": "SPDK2", 00:15:35.603 "model_number": "SPDK bdev Controller", 00:15:35.603 "max_namespaces": 32, 00:15:35.603 "min_cntlid": 1, 00:15:35.603 "max_cntlid": 65519, 00:15:35.603 "namespaces": [ 00:15:35.603 { 00:15:35.603 "nsid": 1, 00:15:35.603 "bdev_name": "Malloc2", 00:15:35.603 "name": "Malloc2", 00:15:35.603 "nguid": "9C15DF385A4B436BB3F837564FE1D4B7", 00:15:35.603 "uuid": "9c15df38-5a4b-436b-b3f8-37564fe1d4b7" 00:15:35.603 } 00:15:35.603 ] 00:15:35.603 } 00:15:35.603 ] 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=402886 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.603 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.861 [2024-07-23 03:15:02.268151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.861 Malloc3 00:15:35.861 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:36.119 [2024-07-23 03:15:02.636950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.119 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.119 Asynchronous Event Request test 00:15:36.119 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.119 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.119 Registering asynchronous event callbacks... 00:15:36.119 Starting namespace attribute notice tests for all controllers... 00:15:36.119 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:36.119 aer_cb - Changed Namespace 00:15:36.119 Cleaning up... 00:15:36.376 [ 00:15:36.376 { 00:15:36.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.376 "subtype": "Discovery", 00:15:36.376 "listen_addresses": [], 00:15:36.376 "allow_any_host": true, 00:15:36.376 "hosts": [] 00:15:36.376 }, 00:15:36.376 { 00:15:36.376 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.376 "subtype": "NVMe", 00:15:36.376 "listen_addresses": [ 00:15:36.376 { 00:15:36.376 "trtype": "VFIOUSER", 00:15:36.376 "adrfam": "IPv4", 00:15:36.376 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.376 "trsvcid": "0" 00:15:36.376 } 00:15:36.376 ], 00:15:36.376 "allow_any_host": true, 00:15:36.376 "hosts": [], 00:15:36.376 "serial_number": "SPDK1", 00:15:36.376 "model_number": "SPDK bdev Controller", 00:15:36.376 "max_namespaces": 32, 00:15:36.376 "min_cntlid": 1, 00:15:36.376 "max_cntlid": 65519, 00:15:36.376 "namespaces": [ 00:15:36.376 { 00:15:36.376 "nsid": 1, 00:15:36.376 "bdev_name": "Malloc1", 00:15:36.376 "name": "Malloc1", 00:15:36.376 "nguid": "C346F5D968CD45C49445E9C7D80BC7D0", 00:15:36.377 "uuid": "c346f5d9-68cd-45c4-9445-e9c7d80bc7d0" 00:15:36.377 }, 00:15:36.377 { 00:15:36.377 "nsid": 2, 00:15:36.377 "bdev_name": "Malloc3", 00:15:36.377 "name": "Malloc3", 00:15:36.377 "nguid": "D33FCCDCB1414258B4EBA3A11458D232", 00:15:36.377 "uuid": "d33fccdc-b141-4258-b4eb-a3a11458d232" 00:15:36.377 } 00:15:36.377 ] 00:15:36.377 }, 00:15:36.377 { 00:15:36.377 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.377 "subtype": "NVMe", 00:15:36.377 "listen_addresses": [ 00:15:36.377 { 00:15:36.377 "trtype": "VFIOUSER", 00:15:36.377 "adrfam": "IPv4", 00:15:36.377 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.377 "trsvcid": "0" 00:15:36.377 } 00:15:36.377 ], 00:15:36.377 "allow_any_host": true, 00:15:36.377 "hosts": [], 00:15:36.377 "serial_number": "SPDK2", 00:15:36.377 "model_number": "SPDK bdev Controller", 00:15:36.377 
"max_namespaces": 32, 00:15:36.377 "min_cntlid": 1, 00:15:36.377 "max_cntlid": 65519, 00:15:36.377 "namespaces": [ 00:15:36.377 { 00:15:36.377 "nsid": 1, 00:15:36.377 "bdev_name": "Malloc2", 00:15:36.377 "name": "Malloc2", 00:15:36.377 "nguid": "9C15DF385A4B436BB3F837564FE1D4B7", 00:15:36.377 "uuid": "9c15df38-5a4b-436b-b3f8-37564fe1d4b7" 00:15:36.377 } 00:15:36.377 ] 00:15:36.377 } 00:15:36.377 ] 00:15:36.377 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 402886 00:15:36.377 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.377 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:36.377 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:36.377 03:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:36.377 [2024-07-23 03:15:02.924062] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:36.377 [2024-07-23 03:15:02.924100] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402909 ] 00:15:36.377 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.636 [2024-07-23 03:15:02.956841] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:36.636 [2024-07-23 03:15:02.965945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.636 [2024-07-23 03:15:02.965990] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f67871d7000 00:15:36.636 [2024-07-23 03:15:02.966952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.967969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.968993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.970004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.971001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.971990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.972994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.974010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.636 [2024-07-23 03:15:02.975023] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.636 [2024-07-23 03:15:02.975045] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6785f89000 00:15:36.636 [2024-07-23 03:15:02.976157] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.636 [2024-07-23 03:15:02.991503] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:36.636 [2024-07-23 03:15:02.991537] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:36.636 [2024-07-23 03:15:02.996657] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.636 [2024-07-23 03:15:02.996715] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:36.636 [2024-07-23 03:15:02.996808] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:36.636 [2024-07-23 03:15:02.996833] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:36.636 [2024-07-23 03:15:02.996843] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:36.636 [2024-07-23 03:15:02.997684] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:36.636 [2024-07-23 03:15:02.997711] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:36.636 [2024-07-23 03:15:02.997725] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:36.636 [2024-07-23 03:15:02.998672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.636 [2024-07-23 03:15:02.998693] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:36.636 [2024-07-23 03:15:02.998707] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:36.636 [2024-07-23 03:15:02.999675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:36.636 [2024-07-23 03:15:02.999697] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:36.637 [2024-07-23 03:15:03.000686] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:36.637 [2024-07-23 03:15:03.000707] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:36.637 [2024-07-23 03:15:03.000716] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:36.637 [2024-07-23 03:15:03.000728] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:36.637 [2024-07-23 03:15:03.000838] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:36.637 [2024-07-23 03:15:03.000846] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:36.637 [2024-07-23 03:15:03.000855] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:36.637 [2024-07-23 03:15:03.001709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:36.637 [2024-07-23 03:15:03.002694] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:36.637 [2024-07-23 03:15:03.003700] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:36.637 [2024-07-23 03:15:03.004693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.637 [2024-07-23 03:15:03.004776] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:36.637 [2024-07-23 03:15:03.005718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:36.637 [2024-07-23 03:15:03.005738] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:36.637 [2024-07-23 03:15:03.005748] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.005773] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:36.637 [2024-07-23 03:15:03.005788] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.005812] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.637 [2024-07-23 03:15:03.005822] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.637 [2024-07-23 03:15:03.005842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.014631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.014673] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:36.637 [2024-07-23 03:15:03.014684] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:36.637 [2024-07-23 03:15:03.014692] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:36.637 [2024-07-23 03:15:03.014700] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:36.637 [2024-07-23 03:15:03.014708] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:36.637 [2024-07-23 03:15:03.014716] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:36.637 [2024-07-23 03:15:03.014725] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.014738] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.014755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.022623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.022650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.637 [2024-07-23 03:15:03.022663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.637 [2024-07-23 03:15:03.022680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.637 [2024-07-23 03:15:03.022694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.637 [2024-07-23 03:15:03.022703] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.022720] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.022735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.030626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.030646] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:36.637 [2024-07-23 03:15:03.030666] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.030678] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.030694] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.030708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.038624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.038700] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.038717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.038731] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:36.637 [2024-07-23 03:15:03.038739] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:36.637 [2024-07-23 03:15:03.038749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.046627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.046651] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:36.637 [2024-07-23 03:15:03.046669] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.046683] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.046697] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.637 [2024-07-23 03:15:03.046705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.637 [2024-07-23 03:15:03.046715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.054623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.054656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.054673] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.054686] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.637 [2024-07-23 03:15:03.054695] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.637 [2024-07-23 03:15:03.054704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.062623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.062644] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062672] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062688] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062699] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062708] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062717] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:36.637 [2024-07-23 03:15:03.062725] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:36.637 [2024-07-23 03:15:03.062734] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:36.637 [2024-07-23 03:15:03.062765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.070623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.070660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.078623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.078649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:36.637 [2024-07-23 03:15:03.086627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:36.637 [2024-07-23 03:15:03.086659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.638 [2024-07-23 03:15:03.094625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:36.638 [2024-07-23 03:15:03.094652] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:36.638 [2024-07-23 03:15:03.094662] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:36.638 [2024-07-23 03:15:03.094674] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:36.638 [2024-07-23 03:15:03.094680] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:36.638 [2024-07-23 03:15:03.094694] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:36.638 [2024-07-23 03:15:03.094708] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:36.638 [2024-07-23 03:15:03.094716] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:36.638 [2024-07-23 03:15:03.094725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:36.638 [2024-07-23 03:15:03.094737] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:36.638 [2024-07-23 03:15:03.094745] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.638 [2024-07-23 03:15:03.094754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.638 [2024-07-23 03:15:03.094766] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:36.638 [2024-07-23 03:15:03.094775] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:36.638 [2024-07-23 03:15:03.094784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:36.638 [2024-07-23 03:15:03.102625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:36.638 [2024-07-23 03:15:03.102653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:36.638 [2024-07-23 03:15:03.102676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:36.638 [2024-07-23 03:15:03.102691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:36.638 ===================================================== 00:15:36.638 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.638 ===================================================== 00:15:36.638 Controller Capabilities/Features 00:15:36.638 ================================ 00:15:36.638 Vendor ID: 4e58 00:15:36.638 Subsystem Vendor ID: 4e58 00:15:36.638 Serial Number: SPDK2 00:15:36.638 Model Number: SPDK bdev Controller 00:15:36.638 Firmware Version: 24.05.1 00:15:36.638 Recommended Arb Burst: 6 00:15:36.638 IEEE OUI Identifier: 8d 6b 50 00:15:36.638 Multi-path I/O 00:15:36.638 May have multiple subsystem ports: Yes 00:15:36.638 May have multiple controllers: Yes 00:15:36.638 Associated with SR-IOV VF: No 00:15:36.638 Max Data Transfer Size: 131072 00:15:36.638 Max Number of Namespaces: 32 00:15:36.638 Max Number of I/O Queues: 127 00:15:36.638 NVMe Specification Version (VS): 1.3 00:15:36.638 NVMe Specification Version (Identify): 1.3 00:15:36.638 Maximum Queue Entries: 256 00:15:36.638 Contiguous Queues Required: Yes 00:15:36.638 Arbitration Mechanisms Supported 00:15:36.638 Weighted Round Robin: Not Supported 00:15:36.638 Vendor Specific: Not Supported 00:15:36.638 Reset Timeout: 15000 ms 00:15:36.638 Doorbell Stride: 4 bytes 
00:15:36.638 NVM Subsystem Reset: Not Supported 00:15:36.638 Command Sets Supported 00:15:36.638 NVM Command Set: Supported 00:15:36.638 Boot Partition: Not Supported 00:15:36.638 Memory Page Size Minimum: 4096 bytes 00:15:36.638 Memory Page Size Maximum: 4096 bytes 00:15:36.638 Persistent Memory Region: Not Supported 00:15:36.638 Optional Asynchronous Events Supported 00:15:36.638 Namespace Attribute Notices: Supported 00:15:36.638 Firmware Activation Notices: Not Supported 00:15:36.638 ANA Change Notices: Not Supported 00:15:36.638 PLE Aggregate Log Change Notices: Not Supported 00:15:36.638 LBA Status Info Alert Notices: Not Supported 00:15:36.638 EGE Aggregate Log Change Notices: Not Supported 00:15:36.638 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.638 Zone Descriptor Change Notices: Not Supported 00:15:36.638 Discovery Log Change Notices: Not Supported 00:15:36.638 Controller Attributes 00:15:36.638 128-bit Host Identifier: Supported 00:15:36.638 Non-Operational Permissive Mode: Not Supported 00:15:36.638 NVM Sets: Not Supported 00:15:36.638 Read Recovery Levels: Not Supported 00:15:36.638 Endurance Groups: Not Supported 00:15:36.638 Predictable Latency Mode: Not Supported 00:15:36.638 Traffic Based Keep ALive: Not Supported 00:15:36.638 Namespace Granularity: Not Supported 00:15:36.638 SQ Associations: Not Supported 00:15:36.638 UUID List: Not Supported 00:15:36.638 Multi-Domain Subsystem: Not Supported 00:15:36.638 Fixed Capacity Management: Not Supported 00:15:36.638 Variable Capacity Management: Not Supported 00:15:36.638 Delete Endurance Group: Not Supported 00:15:36.638 Delete NVM Set: Not Supported 00:15:36.638 Extended LBA Formats Supported: Not Supported 00:15:36.638 Flexible Data Placement Supported: Not Supported 00:15:36.638 00:15:36.638 Controller Memory Buffer Support 00:15:36.638 ================================ 00:15:36.638 Supported: No 00:15:36.638 00:15:36.638 Persistent Memory Region Support 00:15:36.638 ================================ 00:15:36.638 Supported: No 00:15:36.638 00:15:36.638 Admin Command Set Attributes 00:15:36.638 ============================ 00:15:36.638 Security Send/Receive: Not Supported 00:15:36.638 Format NVM: Not Supported 00:15:36.638 Firmware Activate/Download: Not Supported 00:15:36.638 Namespace Management: Not Supported 00:15:36.638 Device Self-Test: Not Supported 00:15:36.638 Directives: Not Supported 00:15:36.638 NVMe-MI: Not Supported 00:15:36.638 Virtualization Management: Not Supported 00:15:36.638 Doorbell Buffer Config: Not Supported 00:15:36.638 Get LBA Status Capability: Not Supported 00:15:36.638 Command & Feature Lockdown Capability: Not Supported 00:15:36.638 Abort Command Limit: 4 00:15:36.638 Async Event Request Limit: 4 00:15:36.638 Number of Firmware Slots: N/A 00:15:36.638 Firmware Slot 1 Read-Only: N/A 00:15:36.638 Firmware Activation Without Reset: N/A 00:15:36.638 Multiple Update Detection Support: N/A 00:15:36.638 Firmware Update Granularity: No Information Provided 00:15:36.638 Per-Namespace SMART Log: No 00:15:36.638 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.638 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:36.638 Command Effects Log Page: Supported 00:15:36.638 Get Log Page Extended Data: Supported 00:15:36.638 Telemetry Log Pages: Not Supported 00:15:36.638 Persistent Event Log Pages: Not Supported 00:15:36.638 Supported Log Pages Log Page: May Support 00:15:36.638 Commands Supported & Effects Log Page: Not Supported 00:15:36.638 Feature Identifiers & Effects Log Page:May 
Support 00:15:36.638 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.638 Data Area 4 for Telemetry Log: Not Supported 00:15:36.638 Error Log Page Entries Supported: 128 00:15:36.638 Keep Alive: Supported 00:15:36.638 Keep Alive Granularity: 10000 ms 00:15:36.638 00:15:36.638 NVM Command Set Attributes 00:15:36.638 ========================== 00:15:36.638 Submission Queue Entry Size 00:15:36.638 Max: 64 00:15:36.638 Min: 64 00:15:36.638 Completion Queue Entry Size 00:15:36.638 Max: 16 00:15:36.638 Min: 16 00:15:36.638 Number of Namespaces: 32 00:15:36.638 Compare Command: Supported 00:15:36.638 Write Uncorrectable Command: Not Supported 00:15:36.638 Dataset Management Command: Supported 00:15:36.638 Write Zeroes Command: Supported 00:15:36.638 Set Features Save Field: Not Supported 00:15:36.638 Reservations: Not Supported 00:15:36.638 Timestamp: Not Supported 00:15:36.638 Copy: Supported 00:15:36.638 Volatile Write Cache: Present 00:15:36.638 Atomic Write Unit (Normal): 1 00:15:36.638 Atomic Write Unit (PFail): 1 00:15:36.638 Atomic Compare & Write Unit: 1 00:15:36.638 Fused Compare & Write: Supported 00:15:36.638 Scatter-Gather List 00:15:36.638 SGL Command Set: Supported (Dword aligned) 00:15:36.638 SGL Keyed: Not Supported 00:15:36.638 SGL Bit Bucket Descriptor: Not Supported 00:15:36.638 SGL Metadata Pointer: Not Supported 00:15:36.638 Oversized SGL: Not Supported 00:15:36.638 SGL Metadata Address: Not Supported 00:15:36.638 SGL Offset: Not Supported 00:15:36.638 Transport SGL Data Block: Not Supported 00:15:36.638 Replay Protected Memory Block: Not Supported 00:15:36.638 00:15:36.638 Firmware Slot Information 00:15:36.638 ========================= 00:15:36.638 Active slot: 1 00:15:36.638 Slot 1 Firmware Revision: 24.05.1 00:15:36.638 00:15:36.638 00:15:36.638 Commands Supported and Effects 00:15:36.638 ============================== 00:15:36.638 Admin Commands 00:15:36.638 -------------- 00:15:36.638 Get Log Page (02h): Supported 00:15:36.638 Identify (06h): Supported 00:15:36.638 Abort (08h): Supported 00:15:36.639 Set Features (09h): Supported 00:15:36.639 Get Features (0Ah): Supported 00:15:36.639 Asynchronous Event Request (0Ch): Supported 00:15:36.639 Keep Alive (18h): Supported 00:15:36.639 I/O Commands 00:15:36.639 ------------ 00:15:36.639 Flush (00h): Supported LBA-Change 00:15:36.639 Write (01h): Supported LBA-Change 00:15:36.639 Read (02h): Supported 00:15:36.639 Compare (05h): Supported 00:15:36.639 Write Zeroes (08h): Supported LBA-Change 00:15:36.639 Dataset Management (09h): Supported LBA-Change 00:15:36.639 Copy (19h): Supported LBA-Change 00:15:36.639 Unknown (79h): Supported LBA-Change 00:15:36.639 Unknown (7Ah): Supported 00:15:36.639 00:15:36.639 Error Log 00:15:36.639 ========= 00:15:36.639 00:15:36.639 Arbitration 00:15:36.639 =========== 00:15:36.639 Arbitration Burst: 1 00:15:36.639 00:15:36.639 Power Management 00:15:36.639 ================ 00:15:36.639 Number of Power States: 1 00:15:36.639 Current Power State: Power State #0 00:15:36.639 Power State #0: 00:15:36.639 Max Power: 0.00 W 00:15:36.639 Non-Operational State: Operational 00:15:36.639 Entry Latency: Not Reported 00:15:36.639 Exit Latency: Not Reported 00:15:36.639 Relative Read Throughput: 0 00:15:36.639 Relative Read Latency: 0 00:15:36.639 Relative Write Throughput: 0 00:15:36.639 Relative Write Latency: 0 00:15:36.639 Idle Power: Not Reported 00:15:36.639 Active Power: Not Reported 00:15:36.639 Non-Operational Permissive Mode: Not Supported 00:15:36.639 00:15:36.639 Health Information 
00:15:36.639 ================== 00:15:36.639 Critical Warnings: 00:15:36.639 Available Spare Space: OK 00:15:36.639 Temperature: OK 00:15:36.639 Device Reliability: OK 00:15:36.639 Read Only: No 00:15:36.639 Volatile Memory Backup: OK 00:15:36.639 Current Temperature: 0 Kelvin[2024-07-23 03:15:03.102811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:36.639 [2024-07-23 03:15:03.110622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:36.639 [2024-07-23 03:15:03.110679] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:36.639 [2024-07-23 03:15:03.110697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.639 [2024-07-23 03:15:03.110708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.639 [2024-07-23 03:15:03.110719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.639 [2024-07-23 03:15:03.110729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.639 [2024-07-23 03:15:03.110809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:36.639 [2024-07-23 03:15:03.110831] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:36.639 [2024-07-23 03:15:03.111810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.639 [2024-07-23 03:15:03.111894] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:36.639 [2024-07-23 03:15:03.111925] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:36.639 [2024-07-23 03:15:03.112813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:36.639 [2024-07-23 03:15:03.112843] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:36.639 [2024-07-23 03:15:03.112910] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:36.639 [2024-07-23 03:15:03.114090] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.639 (-273 Celsius) 00:15:36.639 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:36.639 Available Spare: 0% 00:15:36.639 Available Spare Threshold: 0% 00:15:36.639 Life Percentage Used: 0% 00:15:36.639 Data Units Read: 0 00:15:36.639 Data Units Written: 0 00:15:36.639 Host Read Commands: 0 00:15:36.639 Host Write Commands: 0 00:15:36.639 Controller Busy Time: 0 minutes 00:15:36.639 Power Cycles: 0 00:15:36.639 Power On Hours: 0 hours 00:15:36.639 Unsafe Shutdowns: 0 00:15:36.639 Unrecoverable Media Errors: 0 00:15:36.639 Lifetime Error Log Entries: 0 00:15:36.639 Warning Temperature Time: 0 
minutes 00:15:36.639 Critical Temperature Time: 0 minutes 00:15:36.639 00:15:36.639 Number of Queues 00:15:36.639 ================ 00:15:36.639 Number of I/O Submission Queues: 127 00:15:36.639 Number of I/O Completion Queues: 127 00:15:36.639 00:15:36.639 Active Namespaces 00:15:36.639 ================= 00:15:36.639 Namespace ID:1 00:15:36.639 Error Recovery Timeout: Unlimited 00:15:36.639 Command Set Identifier: NVM (00h) 00:15:36.639 Deallocate: Supported 00:15:36.639 Deallocated/Unwritten Error: Not Supported 00:15:36.639 Deallocated Read Value: Unknown 00:15:36.639 Deallocate in Write Zeroes: Not Supported 00:15:36.639 Deallocated Guard Field: 0xFFFF 00:15:36.639 Flush: Supported 00:15:36.639 Reservation: Supported 00:15:36.639 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.639 Size (in LBAs): 131072 (0GiB) 00:15:36.639 Capacity (in LBAs): 131072 (0GiB) 00:15:36.639 Utilization (in LBAs): 131072 (0GiB) 00:15:36.639 NGUID: 9C15DF385A4B436BB3F837564FE1D4B7 00:15:36.639 UUID: 9c15df38-5a4b-436b-b3f8-37564fe1d4b7 00:15:36.639 Thin Provisioning: Not Supported 00:15:36.639 Per-NS Atomic Units: Yes 00:15:36.639 Atomic Boundary Size (Normal): 0 00:15:36.639 Atomic Boundary Size (PFail): 0 00:15:36.639 Atomic Boundary Offset: 0 00:15:36.639 Maximum Single Source Range Length: 65535 00:15:36.639 Maximum Copy Length: 65535 00:15:36.639 Maximum Source Range Count: 1 00:15:36.639 NGUID/EUI64 Never Reused: No 00:15:36.639 Namespace Write Protected: No 00:15:36.639 Number of LBA Formats: 1 00:15:36.639 Current LBA Format: LBA Format #00 00:15:36.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.639 00:15:36.639 03:15:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:36.639 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.900 [2024-07-23 03:15:03.345409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.164 Initializing NVMe Controllers 00:15:42.164 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:42.164 Initialization complete. Launching workers. 
00:15:42.164 ======================================================== 00:15:42.164 Latency(us) 00:15:42.164 Device Information : IOPS MiB/s Average min max 00:15:42.164 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35902.97 140.25 3564.20 1139.03 7342.82 00:15:42.164 ======================================================== 00:15:42.164 Total : 35902.97 140.25 3564.20 1139.03 7342.82 00:15:42.164 00:15:42.164 [2024-07-23 03:15:08.450960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.164 03:15:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:42.164 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.164 [2024-07-23 03:15:08.681681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:47.427 Initializing NVMe Controllers 00:15:47.427 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:47.427 Initialization complete. Launching workers. 00:15:47.427 ======================================================== 00:15:47.427 Latency(us) 00:15:47.427 Device Information : IOPS MiB/s Average min max 00:15:47.427 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33586.03 131.20 3810.60 1188.44 8314.70 00:15:47.427 ======================================================== 00:15:47.427 Total : 33586.03 131.20 3810.60 1188.44 8314.70 00:15:47.427 00:15:47.427 [2024-07-23 03:15:13.704997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:47.427 03:15:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:47.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.427 [2024-07-23 03:15:13.914835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.750 [2024-07-23 03:15:19.054781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.750 Initializing NVMe Controllers 00:15:52.751 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:52.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:52.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:52.751 Initialization complete. Launching workers. 
00:15:52.751 Starting thread on core 2 00:15:52.751 Starting thread on core 3 00:15:52.751 Starting thread on core 1 00:15:52.751 03:15:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:52.751 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.008 [2024-07-23 03:15:19.346440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.288 [2024-07-23 03:15:22.402679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.288 Initializing NVMe Controllers 00:15:56.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:56.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:56.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:56.288 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:56.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:56.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:56.288 Initialization complete. Launching workers. 00:15:56.288 Starting thread on core 1 with urgent priority queue 00:15:56.288 Starting thread on core 2 with urgent priority queue 00:15:56.288 Starting thread on core 3 with urgent priority queue 00:15:56.288 Starting thread on core 0 with urgent priority queue 00:15:56.288 SPDK bdev Controller (SPDK2 ) core 0: 1719.33 IO/s 58.16 secs/100000 ios 00:15:56.288 SPDK bdev Controller (SPDK2 ) core 1: 1791.67 IO/s 55.81 secs/100000 ios 00:15:56.288 SPDK bdev Controller (SPDK2 ) core 2: 1849.00 IO/s 54.08 secs/100000 ios 00:15:56.288 SPDK bdev Controller (SPDK2 ) core 3: 1750.00 IO/s 57.14 secs/100000 ios 00:15:56.288 ======================================================== 00:15:56.288 00:15:56.288 03:15:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.288 [2024-07-23 03:15:22.687899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.288 Initializing NVMe Controllers 00:15:56.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:56.288 Namespace ID: 1 size: 0GB 00:15:56.288 Initialization complete. 00:15:56.288 INFO: using host memory buffer for IO 00:15:56.288 Hello world! 
00:15:56.288 [2024-07-23 03:15:22.696094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.288 03:15:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.545 [2024-07-23 03:15:22.988393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.918 Initializing NVMe Controllers 00:15:57.918 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.918 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.918 Initialization complete. Launching workers. 00:15:57.918 submit (in ns) avg, min, max = 5735.0, 3500.0, 4002752.2 00:15:57.918 complete (in ns) avg, min, max = 27007.0, 2055.6, 6993645.6 00:15:57.918 00:15:57.918 Submit histogram 00:15:57.918 ================ 00:15:57.918 Range in us Cumulative Count 00:15:57.918 3.484 - 3.508: 0.0744% ( 10) 00:15:57.918 3.508 - 3.532: 0.9081% ( 112) 00:15:57.918 3.532 - 3.556: 2.5011% ( 214) 00:15:57.918 3.556 - 3.579: 6.0518% ( 477) 00:15:57.918 3.579 - 3.603: 12.1929% ( 825) 00:15:57.918 3.603 - 3.627: 21.3488% ( 1230) 00:15:57.918 3.627 - 3.650: 30.0506% ( 1169) 00:15:57.918 3.650 - 3.674: 38.2016% ( 1095) 00:15:57.918 3.674 - 3.698: 44.8712% ( 896) 00:15:57.918 3.698 - 3.721: 51.5706% ( 900) 00:15:57.918 3.721 - 3.745: 56.5133% ( 664) 00:15:57.918 3.745 - 3.769: 60.9796% ( 600) 00:15:57.918 3.769 - 3.793: 64.4038% ( 460) 00:15:57.918 3.793 - 3.816: 67.3813% ( 400) 00:15:57.918 3.816 - 3.840: 71.0585% ( 494) 00:15:57.918 3.840 - 3.864: 75.3387% ( 575) 00:15:57.918 3.864 - 3.887: 79.0531% ( 499) 00:15:57.918 3.887 - 3.911: 82.3731% ( 446) 00:15:57.918 3.911 - 3.935: 85.2613% ( 388) 00:15:57.918 3.935 - 3.959: 87.4200% ( 290) 00:15:57.918 3.959 - 3.982: 89.3554% ( 260) 00:15:57.918 3.982 - 4.006: 90.7474% ( 187) 00:15:57.918 4.006 - 4.030: 91.7299% ( 132) 00:15:57.918 4.030 - 4.053: 92.7200% ( 133) 00:15:57.918 4.053 - 4.077: 93.5462% ( 111) 00:15:57.918 4.077 - 4.101: 94.3129% ( 103) 00:15:57.918 4.101 - 4.124: 94.9754% ( 89) 00:15:57.918 4.124 - 4.148: 95.4593% ( 65) 00:15:57.918 4.148 - 4.172: 95.9580% ( 67) 00:15:57.918 4.172 - 4.196: 96.3079% ( 47) 00:15:57.918 4.196 - 4.219: 96.5610% ( 34) 00:15:57.918 4.219 - 4.243: 96.6503% ( 12) 00:15:57.918 4.243 - 4.267: 96.7471% ( 13) 00:15:57.918 4.267 - 4.290: 96.8736% ( 17) 00:15:57.918 4.290 - 4.314: 97.0001% ( 17) 00:15:57.918 4.314 - 4.338: 97.1118% ( 15) 00:15:57.918 4.338 - 4.361: 97.2011% ( 12) 00:15:57.918 4.361 - 4.385: 97.2756% ( 10) 00:15:57.918 4.385 - 4.409: 97.2905% ( 2) 00:15:57.918 4.409 - 4.433: 97.3128% ( 3) 00:15:57.918 4.433 - 4.456: 97.3351% ( 3) 00:15:57.918 4.456 - 4.480: 97.3649% ( 4) 00:15:57.918 4.480 - 4.504: 97.3798% ( 2) 00:15:57.918 4.504 - 4.527: 97.4096% ( 4) 00:15:57.918 4.551 - 4.575: 97.4244% ( 2) 00:15:57.918 4.575 - 4.599: 97.4319% ( 1) 00:15:57.918 4.599 - 4.622: 97.4468% ( 2) 00:15:57.918 4.622 - 4.646: 97.4542% ( 1) 00:15:57.918 4.670 - 4.693: 97.4617% ( 1) 00:15:57.918 4.717 - 4.741: 97.4691% ( 1) 00:15:57.918 4.764 - 4.788: 97.4840% ( 2) 00:15:57.918 4.788 - 4.812: 97.4914% ( 1) 00:15:57.918 4.812 - 4.836: 97.4989% ( 1) 00:15:57.918 4.836 - 4.859: 97.5361% ( 5) 00:15:57.918 4.859 - 4.883: 97.5957% ( 8) 00:15:57.918 4.883 - 4.907: 97.6478% ( 7) 00:15:57.918 
4.907 - 4.930: 97.7148% ( 9) 00:15:57.918 4.930 - 4.954: 97.7296% ( 2) 00:15:57.918 4.954 - 4.978: 97.7743% ( 6) 00:15:57.918 4.978 - 5.001: 97.8562% ( 11) 00:15:57.918 5.001 - 5.025: 97.8934% ( 5) 00:15:57.918 5.025 - 5.049: 97.9455% ( 7) 00:15:57.918 5.049 - 5.073: 97.9976% ( 7) 00:15:57.918 5.073 - 5.096: 98.0423% ( 6) 00:15:57.918 5.096 - 5.120: 98.0646% ( 3) 00:15:57.918 5.120 - 5.144: 98.0795% ( 2) 00:15:57.918 5.144 - 5.167: 98.1242% ( 6) 00:15:57.918 5.167 - 5.191: 98.1465% ( 3) 00:15:57.918 5.191 - 5.215: 98.1837% ( 5) 00:15:57.918 5.215 - 5.239: 98.1986% ( 2) 00:15:57.918 5.239 - 5.262: 98.2135% ( 2) 00:15:57.918 5.262 - 5.286: 98.2284% ( 2) 00:15:57.918 5.286 - 5.310: 98.2582% ( 4) 00:15:57.918 5.333 - 5.357: 98.2805% ( 3) 00:15:57.918 5.381 - 5.404: 98.2879% ( 1) 00:15:57.918 5.404 - 5.428: 98.2954% ( 1) 00:15:57.918 5.428 - 5.452: 98.3028% ( 1) 00:15:57.918 5.547 - 5.570: 98.3103% ( 1) 00:15:57.919 5.594 - 5.618: 98.3177% ( 1) 00:15:57.919 5.831 - 5.855: 98.3251% ( 1) 00:15:57.919 5.950 - 5.973: 98.3475% ( 3) 00:15:57.919 6.068 - 6.116: 98.3549% ( 1) 00:15:57.919 6.163 - 6.210: 98.3624% ( 1) 00:15:57.919 6.258 - 6.305: 98.3847% ( 3) 00:15:57.919 6.353 - 6.400: 98.3921% ( 1) 00:15:57.919 6.400 - 6.447: 98.3996% ( 1) 00:15:57.919 6.495 - 6.542: 98.4070% ( 1) 00:15:57.919 6.542 - 6.590: 98.4145% ( 1) 00:15:57.919 6.732 - 6.779: 98.4294% ( 2) 00:15:57.919 6.779 - 6.827: 98.4368% ( 1) 00:15:57.919 6.827 - 6.874: 98.4442% ( 1) 00:15:57.919 7.016 - 7.064: 98.4517% ( 1) 00:15:57.919 7.301 - 7.348: 98.4591% ( 1) 00:15:57.919 7.396 - 7.443: 98.4740% ( 2) 00:15:57.919 7.490 - 7.538: 98.4815% ( 1) 00:15:57.919 7.538 - 7.585: 98.4964% ( 2) 00:15:57.919 7.585 - 7.633: 98.5038% ( 1) 00:15:57.919 7.775 - 7.822: 98.5261% ( 3) 00:15:57.919 7.870 - 7.917: 98.5336% ( 1) 00:15:57.919 7.917 - 7.964: 98.5410% ( 1) 00:15:57.919 7.964 - 8.012: 98.5485% ( 1) 00:15:57.919 8.012 - 8.059: 98.5633% ( 2) 00:15:57.919 8.059 - 8.107: 98.5782% ( 2) 00:15:57.919 8.107 - 8.154: 98.5857% ( 1) 00:15:57.919 8.154 - 8.201: 98.5931% ( 1) 00:15:57.919 8.249 - 8.296: 98.6006% ( 1) 00:15:57.919 8.296 - 8.344: 98.6229% ( 3) 00:15:57.919 8.344 - 8.391: 98.6303% ( 1) 00:15:57.919 8.391 - 8.439: 98.6378% ( 1) 00:15:57.919 8.770 - 8.818: 98.6527% ( 2) 00:15:57.919 8.865 - 8.913: 98.6601% ( 1) 00:15:57.919 8.960 - 9.007: 98.6676% ( 1) 00:15:57.919 9.102 - 9.150: 98.6750% ( 1) 00:15:57.919 9.150 - 9.197: 98.6824% ( 1) 00:15:57.919 9.244 - 9.292: 98.6899% ( 1) 00:15:57.919 9.481 - 9.529: 98.6973% ( 1) 00:15:57.919 9.529 - 9.576: 98.7048% ( 1) 00:15:57.919 9.576 - 9.624: 98.7122% ( 1) 00:15:57.919 9.861 - 9.908: 98.7197% ( 1) 00:15:57.919 10.193 - 10.240: 98.7271% ( 1) 00:15:57.919 10.240 - 10.287: 98.7346% ( 1) 00:15:57.919 10.335 - 10.382: 98.7494% ( 2) 00:15:57.919 10.382 - 10.430: 98.7569% ( 1) 00:15:57.919 10.524 - 10.572: 98.7643% ( 1) 00:15:57.919 10.572 - 10.619: 98.7718% ( 1) 00:15:57.919 10.999 - 11.046: 98.7792% ( 1) 00:15:57.919 11.378 - 11.425: 98.7867% ( 1) 00:15:57.919 11.567 - 11.615: 98.7941% ( 1) 00:15:57.919 11.615 - 11.662: 98.8090% ( 2) 00:15:57.919 11.947 - 11.994: 98.8239% ( 2) 00:15:57.919 11.994 - 12.041: 98.8313% ( 1) 00:15:57.919 12.421 - 12.516: 98.8388% ( 1) 00:15:57.919 12.610 - 12.705: 98.8537% ( 2) 00:15:57.919 12.895 - 12.990: 98.8611% ( 1) 00:15:57.919 13.369 - 13.464: 98.8685% ( 1) 00:15:57.919 13.464 - 13.559: 98.8834% ( 2) 00:15:57.919 13.559 - 13.653: 98.8909% ( 1) 00:15:57.919 13.843 - 13.938: 98.8983% ( 1) 00:15:57.919 13.938 - 14.033: 98.9058% ( 1) 00:15:57.919 14.127 - 14.222: 98.9206% ( 
2) 00:15:57.919 14.317 - 14.412: 98.9281% ( 1) 00:15:57.919 14.412 - 14.507: 98.9355% ( 1) 00:15:57.919 14.507 - 14.601: 98.9430% ( 1) 00:15:57.919 14.696 - 14.791: 98.9504% ( 1) 00:15:57.919 14.886 - 14.981: 98.9579% ( 1) 00:15:57.919 15.170 - 15.265: 98.9653% ( 1) 00:15:57.919 16.972 - 17.067: 98.9728% ( 1) 00:15:57.919 17.256 - 17.351: 98.9876% ( 2) 00:15:57.919 17.351 - 17.446: 99.0323% ( 6) 00:15:57.919 17.446 - 17.541: 99.0621% ( 4) 00:15:57.919 17.541 - 17.636: 99.0844% ( 3) 00:15:57.919 17.636 - 17.730: 99.1142% ( 4) 00:15:57.919 17.730 - 17.825: 99.1514% ( 5) 00:15:57.919 17.825 - 17.920: 99.1961% ( 6) 00:15:57.919 17.920 - 18.015: 99.2556% ( 8) 00:15:57.919 18.015 - 18.110: 99.3152% ( 8) 00:15:57.919 18.110 - 18.204: 99.4343% ( 16) 00:15:57.919 18.204 - 18.299: 99.5236% ( 12) 00:15:57.919 18.299 - 18.394: 99.5906% ( 9) 00:15:57.919 18.394 - 18.489: 99.6576% ( 9) 00:15:57.919 18.489 - 18.584: 99.6874% ( 4) 00:15:57.919 18.584 - 18.679: 99.7246% ( 5) 00:15:57.919 18.679 - 18.773: 99.7618% ( 5) 00:15:57.919 18.773 - 18.868: 99.8065% ( 6) 00:15:57.919 18.868 - 18.963: 99.8213% ( 2) 00:15:57.919 19.058 - 19.153: 99.8362% ( 2) 00:15:57.919 19.153 - 19.247: 99.8511% ( 2) 00:15:57.919 19.247 - 19.342: 99.8586% ( 1) 00:15:57.919 19.342 - 19.437: 99.8660% ( 1) 00:15:57.919 19.437 - 19.532: 99.8735% ( 1) 00:15:57.919 19.532 - 19.627: 99.8809% ( 1) 00:15:57.919 19.816 - 19.911: 99.8883% ( 1) 00:15:57.919 19.911 - 20.006: 99.8958% ( 1) 00:15:57.919 20.290 - 20.385: 99.9032% ( 1) 00:15:57.919 20.954 - 21.049: 99.9107% ( 1) 00:15:57.919 24.178 - 24.273: 99.9181% ( 1) 00:15:57.919 26.927 - 27.117: 99.9256% ( 1) 00:15:57.919 27.307 - 27.496: 99.9330% ( 1) 00:15:57.919 28.444 - 28.634: 99.9479% ( 2) 00:15:57.919 33.564 - 33.754: 99.9553% ( 1) 00:15:57.919 3980.705 - 4004.978: 100.0000% ( 6) 00:15:57.919 00:15:57.919 Complete histogram 00:15:57.919 ================== 00:15:57.919 Range in us Cumulative Count 00:15:57.919 2.050 - 2.062: 0.4243% ( 57) 00:15:57.919 2.062 - 2.074: 28.7107% ( 3800) 00:15:57.919 2.074 - 2.086: 40.4570% ( 1578) 00:15:57.919 2.086 - 2.098: 43.2634% ( 377) 00:15:57.919 2.098 - 2.110: 56.9897% ( 1844) 00:15:57.919 2.110 - 2.121: 61.1061% ( 553) 00:15:57.919 2.121 - 2.133: 64.5303% ( 460) 00:15:57.919 2.133 - 2.145: 73.9541% ( 1266) 00:15:57.919 2.145 - 2.157: 75.7555% ( 242) 00:15:57.919 2.157 - 2.169: 77.8026% ( 275) 00:15:57.919 2.169 - 2.181: 81.3310% ( 474) 00:15:57.919 2.181 - 2.193: 82.3061% ( 131) 00:15:57.919 2.193 - 2.204: 83.7948% ( 200) 00:15:57.919 2.204 - 2.216: 87.7475% ( 531) 00:15:57.919 2.216 - 2.228: 89.8094% ( 277) 00:15:57.919 2.228 - 2.240: 91.6332% ( 245) 00:15:57.919 2.240 - 2.252: 93.4792% ( 248) 00:15:57.919 2.252 - 2.264: 93.9556% ( 64) 00:15:57.919 2.264 - 2.276: 94.1864% ( 31) 00:15:57.919 2.276 - 2.287: 94.5437% ( 48) 00:15:57.919 2.287 - 2.299: 94.9680% ( 57) 00:15:57.919 2.299 - 2.311: 95.4295% ( 62) 00:15:57.919 2.311 - 2.323: 95.6007% ( 23) 00:15:57.919 2.323 - 2.335: 95.6603% ( 8) 00:15:57.919 2.335 - 2.347: 95.7347% ( 10) 00:15:57.919 2.347 - 2.359: 95.7868% ( 7) 00:15:57.919 2.359 - 2.370: 96.0027% ( 29) 00:15:57.919 2.370 - 2.382: 96.2781% ( 37) 00:15:57.919 2.382 - 2.394: 96.6428% ( 49) 00:15:57.919 2.394 - 2.406: 96.9183% ( 37) 00:15:57.919 2.406 - 2.418: 97.1118% ( 26) 00:15:57.919 2.418 - 2.430: 97.3128% ( 27) 00:15:57.919 2.430 - 2.441: 97.5361% ( 30) 00:15:57.919 2.441 - 2.453: 97.6701% ( 18) 00:15:57.919 2.453 - 2.465: 97.8264% ( 21) 00:15:57.919 2.465 - 2.477: 97.9232% ( 13) 00:15:57.919 2.477 - 2.489: 97.9976% ( 10) 00:15:57.919 
2.489 - 2.501: 98.0869% ( 12) 00:15:57.919 2.501 - 2.513: 98.1763% ( 12) 00:15:57.919 2.513 - 2.524: 98.1986% ( 3) 00:15:57.919 2.524 - 2.536: 98.2284% ( 4) 00:15:57.919 2.536 - 2.548: 98.2879% ( 8) 00:15:57.919 2.548 - 2.560: 98.3103% ( 3) 00:15:57.919 2.560 - 2.572: 98.3177% ( 1) 00:15:57.919 2.572 - 2.584: 98.3251% ( 1) 00:15:57.919 2.631 - 2.643: 98.3326% ( 1) 00:15:57.919 2.714 - 2.726: 98.3400% ( 1) 00:15:57.919 2.773 - 2.785: 98.3475% ( 1) 00:15:57.919 2.844 - 2.856: 98.3549% ( 1) 00:15:57.919 3.342 - 3.366: 98.3624% ( 1) 00:15:57.919 3.413 - 3.437: 98.3698% ( 1) 00:15:57.919 3.556 - 3.579: 98.3773% ( 1) 00:15:57.919 3.579 - 3.603: 98.3847% ( 1) 00:15:57.919 3.603 - 3.627: 98.4070% ( 3) 00:15:57.919 3.627 - 3.650: 98.4145% ( 1) 00:15:57.919 3.650 - 3.674: 98.4368% ( 3) 00:15:57.919 3.698 - 3.721: 98.4517% ( 2) 00:15:57.919 3.745 - 3.769: 98.4591% ( 1) 00:15:57.919 3.769 - 3.793: 98.4815% ( 3) 00:15:57.919 3.793 - 3.816: 98.4964% ( 2) 00:15:57.919 3.816 - 3.840: 98.5038% ( 1) 00:15:57.919 3.840 - 3.864: 98.5112% ( 1) 00:15:57.919 3.864 - 3.887: 98.5261% ( 2) 00:15:57.919 3.911 - 3.935: 98.5410% ( 2) 00:15:57.919 3.935 - 3.959: 98.5485% ( 1) 00:15:57.919 3.959 - 3.982: 98.5633% ( 2) 00:15:57.919 4.006 - 4.030: 98.5708% ( 1) 00:15:57.919 4.053 - 4.077: 98.5931% ( 3) 00:15:57.919 4.124 - 4.148: 98.6006% ( 1) 00:15:57.919 4.148 - 4.172: 98.6155% ( 2) 00:15:57.919 4.196 - 4.219: 9[2024-07-23 03:15:24.092414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.919 8.6229% ( 1) 00:15:57.919 4.243 - 4.267: 98.6303% ( 1) 00:15:57.919 4.314 - 4.338: 98.6378% ( 1) 00:15:57.919 4.338 - 4.361: 98.6452% ( 1) 00:15:57.919 5.096 - 5.120: 98.6527% ( 1) 00:15:57.920 5.191 - 5.215: 98.6601% ( 1) 00:15:57.920 5.381 - 5.404: 98.6676% ( 1) 00:15:57.920 5.404 - 5.428: 98.6750% ( 1) 00:15:57.920 5.428 - 5.452: 98.6824% ( 1) 00:15:57.920 5.641 - 5.665: 98.6899% ( 1) 00:15:57.920 5.665 - 5.689: 98.6973% ( 1) 00:15:57.920 5.736 - 5.760: 98.7048% ( 1) 00:15:57.920 5.831 - 5.855: 98.7197% ( 2) 00:15:57.920 6.210 - 6.258: 98.7271% ( 1) 00:15:57.920 6.258 - 6.305: 98.7346% ( 1) 00:15:57.920 6.305 - 6.353: 98.7420% ( 1) 00:15:57.920 6.495 - 6.542: 98.7494% ( 1) 00:15:57.920 6.732 - 6.779: 98.7718% ( 3) 00:15:57.920 6.827 - 6.874: 98.7867% ( 2) 00:15:57.920 6.874 - 6.921: 98.7941% ( 1) 00:15:57.920 6.969 - 7.016: 98.8015% ( 1) 00:15:57.920 7.064 - 7.111: 98.8090% ( 1) 00:15:57.920 7.538 - 7.585: 98.8164% ( 1) 00:15:57.920 7.633 - 7.680: 98.8239% ( 1) 00:15:57.920 7.775 - 7.822: 98.8313% ( 1) 00:15:57.920 15.455 - 15.550: 98.8462% ( 2) 00:15:57.920 15.550 - 15.644: 98.8611% ( 2) 00:15:57.920 15.644 - 15.739: 98.8760% ( 2) 00:15:57.920 15.739 - 15.834: 98.8983% ( 3) 00:15:57.920 15.834 - 15.929: 98.9430% ( 6) 00:15:57.920 16.024 - 16.119: 98.9579% ( 2) 00:15:57.920 16.119 - 16.213: 98.9951% ( 5) 00:15:57.920 16.213 - 16.308: 99.0100% ( 2) 00:15:57.920 16.403 - 16.498: 99.0770% ( 9) 00:15:57.920 16.498 - 16.593: 99.1514% ( 10) 00:15:57.920 16.593 - 16.687: 99.2110% ( 8) 00:15:57.920 16.687 - 16.782: 99.2333% ( 3) 00:15:57.920 16.782 - 16.877: 99.2556% ( 3) 00:15:57.920 16.877 - 16.972: 99.2705% ( 2) 00:15:57.920 16.972 - 17.067: 99.2928% ( 3) 00:15:57.920 17.067 - 17.161: 99.3077% ( 2) 00:15:57.920 17.161 - 17.256: 99.3375% ( 4) 00:15:57.920 17.256 - 17.351: 99.3598% ( 3) 00:15:57.920 17.636 - 17.730: 99.3747% ( 2) 00:15:57.920 17.730 - 17.825: 99.3896% ( 2) 00:15:57.920 3980.705 - 4004.978: 99.8809% ( 66) 00:15:57.920 4004.978 - 4029.250: 99.9851% ( 14) 
00:15:57.920 5995.330 - 6019.603: 99.9926% ( 1) 00:15:57.920 6990.507 - 7039.052: 100.0000% ( 1) 00:15:57.920 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.920 [ 00:15:57.920 { 00:15:57.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.920 "subtype": "Discovery", 00:15:57.920 "listen_addresses": [], 00:15:57.920 "allow_any_host": true, 00:15:57.920 "hosts": [] 00:15:57.920 }, 00:15:57.920 { 00:15:57.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.920 "subtype": "NVMe", 00:15:57.920 "listen_addresses": [ 00:15:57.920 { 00:15:57.920 "trtype": "VFIOUSER", 00:15:57.920 "adrfam": "IPv4", 00:15:57.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.920 "trsvcid": "0" 00:15:57.920 } 00:15:57.920 ], 00:15:57.920 "allow_any_host": true, 00:15:57.920 "hosts": [], 00:15:57.920 "serial_number": "SPDK1", 00:15:57.920 "model_number": "SPDK bdev Controller", 00:15:57.920 "max_namespaces": 32, 00:15:57.920 "min_cntlid": 1, 00:15:57.920 "max_cntlid": 65519, 00:15:57.920 "namespaces": [ 00:15:57.920 { 00:15:57.920 "nsid": 1, 00:15:57.920 "bdev_name": "Malloc1", 00:15:57.920 "name": "Malloc1", 00:15:57.920 "nguid": "C346F5D968CD45C49445E9C7D80BC7D0", 00:15:57.920 "uuid": "c346f5d9-68cd-45c4-9445-e9c7d80bc7d0" 00:15:57.920 }, 00:15:57.920 { 00:15:57.920 "nsid": 2, 00:15:57.920 "bdev_name": "Malloc3", 00:15:57.920 "name": "Malloc3", 00:15:57.920 "nguid": "D33FCCDCB1414258B4EBA3A11458D232", 00:15:57.920 "uuid": "d33fccdc-b141-4258-b4eb-a3a11458d232" 00:15:57.920 } 00:15:57.920 ] 00:15:57.920 }, 00:15:57.920 { 00:15:57.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.920 "subtype": "NVMe", 00:15:57.920 "listen_addresses": [ 00:15:57.920 { 00:15:57.920 "trtype": "VFIOUSER", 00:15:57.920 "adrfam": "IPv4", 00:15:57.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.920 "trsvcid": "0" 00:15:57.920 } 00:15:57.920 ], 00:15:57.920 "allow_any_host": true, 00:15:57.920 "hosts": [], 00:15:57.920 "serial_number": "SPDK2", 00:15:57.920 "model_number": "SPDK bdev Controller", 00:15:57.920 "max_namespaces": 32, 00:15:57.920 "min_cntlid": 1, 00:15:57.920 "max_cntlid": 65519, 00:15:57.920 "namespaces": [ 00:15:57.920 { 00:15:57.920 "nsid": 1, 00:15:57.920 "bdev_name": "Malloc2", 00:15:57.920 "name": "Malloc2", 00:15:57.920 "nguid": "9C15DF385A4B436BB3F837564FE1D4B7", 00:15:57.920 "uuid": "9c15df38-5a4b-436b-b3f8-37564fe1d4b7" 00:15:57.920 } 00:15:57.920 ] 00:15:57.920 } 00:15:57.920 ] 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=405926 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 
2 -g -t /tmp/aer_touch_file 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.920 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:57.920 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.178 [2024-07-23 03:15:24.580835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.179 Malloc4 00:15:58.179 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:58.436 [2024-07-23 03:15:24.952606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.437 03:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:58.437 Asynchronous Event Request test 00:15:58.437 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.437 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.437 Registering asynchronous event callbacks... 00:15:58.437 Starting namespace attribute notice tests for all controllers... 00:15:58.437 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:58.437 aer_cb - Changed Namespace 00:15:58.437 Cleaning up... 
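The namespace hot-add exercised above can be reproduced by hand against the same controller. A minimal sketch, assuming the target from this run is still serving /var/run/vfio-user/domain/vfio-user2/2, with repository-relative paths in place of the absolute workspace paths seen in the trace (the touch-file name is arbitrary):

    # Start the AER listener on cnode2; -t makes it touch a file once its callbacks are armed.
    ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -n 2 -g -t /tmp/aer_touch_file &
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done

    # Hot-add a namespace; the target raises the namespace-attribute-notice AEN for log page 4,
    # which is the "aer_cb - Changed Namespace" line above.
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

    # The new nsid 2 / Malloc4 entry then appears in the subsystem dump that follows.
    ./scripts/rpc.py nvmf_get_subsystems
    wait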
00:15:58.695 [ 00:15:58.695 { 00:15:58.695 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:58.695 "subtype": "Discovery", 00:15:58.695 "listen_addresses": [], 00:15:58.695 "allow_any_host": true, 00:15:58.695 "hosts": [] 00:15:58.695 }, 00:15:58.695 { 00:15:58.695 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:58.695 "subtype": "NVMe", 00:15:58.695 "listen_addresses": [ 00:15:58.695 { 00:15:58.695 "trtype": "VFIOUSER", 00:15:58.695 "adrfam": "IPv4", 00:15:58.695 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:58.695 "trsvcid": "0" 00:15:58.695 } 00:15:58.695 ], 00:15:58.695 "allow_any_host": true, 00:15:58.695 "hosts": [], 00:15:58.695 "serial_number": "SPDK1", 00:15:58.695 "model_number": "SPDK bdev Controller", 00:15:58.695 "max_namespaces": 32, 00:15:58.695 "min_cntlid": 1, 00:15:58.695 "max_cntlid": 65519, 00:15:58.695 "namespaces": [ 00:15:58.695 { 00:15:58.695 "nsid": 1, 00:15:58.695 "bdev_name": "Malloc1", 00:15:58.695 "name": "Malloc1", 00:15:58.695 "nguid": "C346F5D968CD45C49445E9C7D80BC7D0", 00:15:58.695 "uuid": "c346f5d9-68cd-45c4-9445-e9c7d80bc7d0" 00:15:58.695 }, 00:15:58.695 { 00:15:58.695 "nsid": 2, 00:15:58.695 "bdev_name": "Malloc3", 00:15:58.695 "name": "Malloc3", 00:15:58.695 "nguid": "D33FCCDCB1414258B4EBA3A11458D232", 00:15:58.695 "uuid": "d33fccdc-b141-4258-b4eb-a3a11458d232" 00:15:58.695 } 00:15:58.695 ] 00:15:58.695 }, 00:15:58.695 { 00:15:58.695 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:58.695 "subtype": "NVMe", 00:15:58.695 "listen_addresses": [ 00:15:58.695 { 00:15:58.695 "trtype": "VFIOUSER", 00:15:58.695 "adrfam": "IPv4", 00:15:58.695 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:58.695 "trsvcid": "0" 00:15:58.695 } 00:15:58.695 ], 00:15:58.695 "allow_any_host": true, 00:15:58.695 "hosts": [], 00:15:58.695 "serial_number": "SPDK2", 00:15:58.695 "model_number": "SPDK bdev Controller", 00:15:58.695 "max_namespaces": 32, 00:15:58.695 "min_cntlid": 1, 00:15:58.695 "max_cntlid": 65519, 00:15:58.695 "namespaces": [ 00:15:58.695 { 00:15:58.695 "nsid": 1, 00:15:58.695 "bdev_name": "Malloc2", 00:15:58.695 "name": "Malloc2", 00:15:58.695 "nguid": "9C15DF385A4B436BB3F837564FE1D4B7", 00:15:58.695 "uuid": "9c15df38-5a4b-436b-b3f8-37564fe1d4b7" 00:15:58.695 }, 00:15:58.695 { 00:15:58.695 "nsid": 2, 00:15:58.695 "bdev_name": "Malloc4", 00:15:58.695 "name": "Malloc4", 00:15:58.695 "nguid": "F3353C21C0544AF9B509EF30580CBB06", 00:15:58.695 "uuid": "f3353c21-c054-4af9-b509-ef30580cbb06" 00:15:58.695 } 00:15:58.695 ] 00:15:58.695 } 00:15:58.695 ] 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 405926 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 399819 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 399819 ']' 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 399819 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 399819 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 399819' 00:15:58.695 killing process with pid 399819 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 399819 00:15:58.695 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 399819 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=406067 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 406067' 00:15:59.261 Process pid: 406067 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 406067 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 406067 ']' 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:59.261 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:59.261 [2024-07-23 03:15:25.584952] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:59.261 [2024-07-23 03:15:25.585940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:59.262 [2024-07-23 03:15:25.585998] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.262 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.262 [2024-07-23 03:15:25.650072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.262 [2024-07-23 03:15:25.737934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.262 [2024-07-23 03:15:25.738001] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.262 [2024-07-23 03:15:25.738029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.262 [2024-07-23 03:15:25.738041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.262 [2024-07-23 03:15:25.738051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.262 [2024-07-23 03:15:25.738104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.262 [2024-07-23 03:15:25.738129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.262 [2024-07-23 03:15:25.738188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.262 [2024-07-23 03:15:25.738190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.519 [2024-07-23 03:15:25.843262] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:59.519 [2024-07-23 03:15:25.843508] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:59.519 [2024-07-23 03:15:25.843839] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:59.519 [2024-07-23 03:15:25.844434] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:59.519 [2024-07-23 03:15:25.844701] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:59.519 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:59.519 03:15:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:59.519 03:15:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:00.452 03:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:00.710 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:00.710 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:00.710 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.710 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:00.710 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:00.969 Malloc1 00:16:00.969 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:01.228 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:01.485 03:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:01.743 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:01.743 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:01.743 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:02.000 Malloc2 00:16:02.000 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:02.257 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:02.514 03:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 406067 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 406067 ']' 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 406067 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406067 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406067' 00:16:02.772 killing process with pid 406067 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 406067 00:16:02.772 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 406067 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:03.030 00:16:03.030 real 0m52.296s 00:16:03.030 user 3m26.710s 00:16:03.030 sys 0m4.233s 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:03.030 ************************************ 00:16:03.030 END TEST nvmf_vfio_user 00:16:03.030 ************************************ 00:16:03.030 03:15:29 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:03.030 03:15:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:03.030 03:15:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:03.030 03:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.030 ************************************ 00:16:03.030 START TEST nvmf_vfio_user_nvme_compliance 00:16:03.030 ************************************ 
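Before the compliance output, it helps to condense the per-device provisioning loop that the interrupt-mode pass above traced once for each vfio-user device. A minimal sketch with every RPC call copied from the trace, repository-relative paths assumed, and the real script's waitforlisten/RPC-socket handshake elided:

    # Target in interrupt mode on cores 0-3, then the VFIOUSER transport with the -M -I arguments from the trace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        # Each controller is exposed as a vfio-user socket directory rather than an IP:port endpoint.
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done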
00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:03.030 * Looking for test storage... 00:16:03.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.030 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=406664 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 406664' 00:16:03.031 Process pid: 406664 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 406664 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 406664 ']' 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:03.031 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 [2024-07-23 03:15:29.642340] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:03.289 [2024-07-23 03:15:29.642427] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.289 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.289 [2024-07-23 03:15:29.713601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.289 [2024-07-23 03:15:29.805716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.289 [2024-07-23 03:15:29.805775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.289 [2024-07-23 03:15:29.805792] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.289 [2024-07-23 03:15:29.805806] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.289 [2024-07-23 03:15:29.805818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
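The two notices just above spell out how to inspect tracepoints for this instance. A short sketch of both options, assuming the spdk_trace tool sits at the usual build output path:

    # Snapshot the live tracepoint ring of shm instance 0 of the nvmf app, as the notice suggests.
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt

    # Or keep the raw shared-memory file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0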
00:16:03.289 [2024-07-23 03:15:29.805890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.289 [2024-07-23 03:15:29.805943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.289 [2024-07-23 03:15:29.805962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.547 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.547 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:03.547 03:15:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 malloc0 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 
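The compliance run invoked on the next line needs only the single controller just provisioned behind /var/run/vfio-user. A condensed sketch of that setup and the test invocation, assuming rpc_cmd in the trace is the usual thin wrapper around scripts/rpc.py and using repository-relative paths:

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # -a allows any host, -s sets the serial number, -m 32 caps the namespace count (max_namespaces above).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    ./test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'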
03:15:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:04.479 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.737 00:16:04.737 00:16:04.737 CUnit - A unit testing framework for C - Version 2.1-3 00:16:04.737 http://cunit.sourceforge.net/ 00:16:04.737 00:16:04.737 00:16:04.737 Suite: nvme_compliance 00:16:04.737 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 03:15:31.152123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.737 [2024-07-23 03:15:31.153561] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:04.737 [2024-07-23 03:15:31.153584] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:04.737 [2024-07-23 03:15:31.153611] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:04.737 [2024-07-23 03:15:31.155150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.737 passed 00:16:04.737 Test: admin_identify_ctrlr_verify_fused ...[2024-07-23 03:15:31.240761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.737 [2024-07-23 03:15:31.243775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.737 passed 00:16:04.994 Test: admin_identify_ns ...[2024-07-23 03:15:31.331397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.994 [2024-07-23 03:15:31.391661] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:04.994 [2024-07-23 03:15:31.399646] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:04.994 [2024-07-23 03:15:31.420758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.994 passed 00:16:04.994 Test: admin_get_features_mandatory_features ...[2024-07-23 03:15:31.504441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.994 [2024-07-23 03:15:31.507466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.994 passed 00:16:05.252 Test: admin_get_features_optional_features ...[2024-07-23 03:15:31.590049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.252 [2024-07-23 03:15:31.594074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.252 passed 00:16:05.252 Test: admin_set_features_number_of_queues ...[2024-07-23 03:15:31.676155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.252 [2024-07-23 03:15:31.783747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.252 passed 00:16:05.510 Test: admin_get_log_page_mandatory_logs ...[2024-07-23 03:15:31.864390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.510 [2024-07-23 03:15:31.869418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.510 passed 00:16:05.510 Test: admin_get_log_page_with_lpo ...[2024-07-23 03:15:31.952160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.510 [2024-07-23 03:15:32.019632] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:05.510 [2024-07-23 03:15:32.032709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.510 passed 00:16:05.767 Test: fabric_property_get ...[2024-07-23 03:15:32.114936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.767 [2024-07-23 03:15:32.116184] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:05.767 [2024-07-23 03:15:32.117943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.767 passed 00:16:05.767 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 03:15:32.202465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.767 [2024-07-23 03:15:32.203771] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:05.767 [2024-07-23 03:15:32.208505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.767 passed 00:16:05.767 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 03:15:32.289711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.025 [2024-07-23 03:15:32.375623] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.025 [2024-07-23 03:15:32.391655] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.025 [2024-07-23 03:15:32.396737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.025 passed 00:16:06.025 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 03:15:32.481481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.025 [2024-07-23 03:15:32.482788] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:06.025 [2024-07-23 03:15:32.484500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.025 passed 00:16:06.025 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 03:15:32.565655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.282 [2024-07-23 03:15:32.643639] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:06.282 [2024-07-23 03:15:32.667637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.282 [2024-07-23 03:15:32.672733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.282 passed 00:16:06.282 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 03:15:32.756463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.282 [2024-07-23 03:15:32.757763] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:06.282 [2024-07-23 03:15:32.757804] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:06.282 [2024-07-23 03:15:32.759481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.282 passed 00:16:06.282 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 03:15:32.842726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.547 [2024-07-23 03:15:32.932628] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:06.547 [2024-07-23 03:15:32.940642] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:06.547 [2024-07-23 03:15:32.948621] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:06.547 [2024-07-23 03:15:32.956624] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:06.547 [2024-07-23 03:15:32.985731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.547 passed 00:16:06.547 Test: admin_create_io_sq_verify_pc ...[2024-07-23 03:15:33.070961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.548 [2024-07-23 03:15:33.087653] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:06.548 [2024-07-23 03:15:33.105323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.845 passed 00:16:06.845 Test: admin_create_io_qp_max_qps ...[2024-07-23 03:15:33.189890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.777 [2024-07-23 03:15:34.302629] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:08.342 [2024-07-23 03:15:34.699004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.342 passed 00:16:08.342 Test: admin_create_io_sq_shared_cq ...[2024-07-23 03:15:34.780382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.342 [2024-07-23 03:15:34.915653] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:08.599 [2024-07-23 03:15:34.952726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.600 passed 00:16:08.600 00:16:08.600 Run Summary: Type Total Ran Passed Failed Inactive 00:16:08.600 suites 1 1 n/a 0 0 00:16:08.600 tests 18 18 18 0 0 00:16:08.600 asserts 360 360 360 0 n/a 00:16:08.600 00:16:08.600 Elapsed time = 1.577 seconds 00:16:08.600 03:15:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 406664 00:16:08.600 03:15:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 406664 ']' 00:16:08.600 03:15:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 406664 00:16:08.600 03:15:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406664 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406664' 00:16:08.600 killing process with pid 406664 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 406664 00:16:08.600 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 406664 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:08.858 00:16:08.858 real 0m5.756s 00:16:08.858 user 0m16.120s 00:16:08.858 sys 0m0.555s 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:08.858 ************************************ 00:16:08.858 END TEST nvmf_vfio_user_nvme_compliance 00:16:08.858 ************************************ 00:16:08.858 03:15:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.858 03:15:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:08.858 03:15:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.858 03:15:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.858 ************************************ 00:16:08.858 START TEST nvmf_vfio_user_fuzz 00:16:08.858 ************************************ 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.858 * Looking for test storage... 00:16:08.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.858 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.859 03:15:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=407385 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 407385' 00:16:08.859 Process pid: 407385 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 407385 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 407385 ']' 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:08.859 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.426 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:09.426 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:09.426 03:15:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.362 malloc0 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:10.362 03:15:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:42.433 Fuzzing completed. 
Shutting down the fuzz application 00:16:42.433 00:16:42.433 Dumping successful admin opcodes: 00:16:42.433 8, 9, 10, 24, 00:16:42.433 Dumping successful io opcodes: 00:16:42.433 0, 00:16:42.433 NS: 0x200003a1ef00 I/O qp, Total commands completed: 549068, total successful commands: 2108, random_seed: 3631242560 00:16:42.433 NS: 0x200003a1ef00 admin qp, Total commands completed: 133233, total successful commands: 1081, random_seed: 2620411584 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 407385 ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 407385' 00:16:42.434 killing process with pid 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 407385 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:42.434 00:16:42.434 real 0m32.242s 00:16:42.434 user 0m31.243s 00:16:42.434 sys 0m28.529s 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:42.434 03:16:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.434 ************************************ 00:16:42.434 END TEST nvmf_vfio_user_fuzz 00:16:42.434 ************************************ 00:16:42.434 03:16:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:42.434 03:16:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:42.434 03:16:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:42.434 03:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.434 ************************************ 00:16:42.434 START TEST nvmf_host_management 00:16:42.434 ************************************ 
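Before the host-management output, note that the fuzz pass just completed reduces to one client invocation against the same single-controller vfio-user target set up above (malloc0 behind nqn.2021-09.io.spdk:cnode0 at /var/run/vfio-user). Copied from the trace with a repository-relative path; -m 0x2 pins the fuzzer to core 1, and -t 30 matches the roughly 30-second window between the start and summary timestamps:

    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a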
00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:42.434 * Looking for test storage... 00:16:42.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
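Among the variables set while nvmf/common.sh is sourced above, the host identity comes from nvme-cli's gen-hostnqn. The trace only shows the resulting NQN and host ID values, so the extraction step in this sketch is an assumption about how the ID is derived from the NQN, not a quote of the harness code.

    # sketch of the host-identity derivation traced above; the parameter
    # expansion that pulls the UUID out of the NQN is an assumption
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")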
00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.434 03:16:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:43.370 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.371 03:16:09 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.371 03:16:09 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:16:43.371 00:16:43.371 --- 10.0.0.2 ping statistics --- 00:16:43.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.371 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:43.371 00:16:43.371 --- 10.0.0.1 ping statistics --- 00:16:43.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.371 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.371 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=412828 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 412828 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 412828 ']' 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:43.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:43.372 03:16:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.372 [2024-07-23 03:16:09.839271] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:43.372 [2024-07-23 03:16:09.839346] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.372 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.372 [2024-07-23 03:16:09.903119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.629 [2024-07-23 03:16:09.989840] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.629 [2024-07-23 03:16:09.989916] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.629 [2024-07-23 03:16:09.989929] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.629 [2024-07-23 03:16:09.989944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.629 [2024-07-23 03:16:09.989953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.629 [2024-07-23 03:16:09.990055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.629 [2024-07-23 03:16:09.990113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.629 [2024-07-23 03:16:09.990189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:43.629 [2024-07-23 03:16:09.990192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 [2024-07-23 03:16:10.133274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 Malloc0 00:16:43.629 [2024-07-23 03:16:10.192010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.629 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=412871 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 412871 /var/tmp/bdevperf.sock 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 412871 ']' 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
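Pulled out of the harness, the bdevperf launch traced above is roughly the command below. The arguments are taken verbatim from the trace; gen_nvmf_target_json is the harness helper whose generated JSON (the /dev/fd/63 substitution) is printed in full a few entries further down.

    # rough standalone form of the bdevperf launch traced above
    # (arguments verbatim from the trace; gen_nvmf_target_json is the
    #  harness helper whose output appears just below in this log)
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10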
00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:43.887 { 00:16:43.887 "params": { 00:16:43.887 "name": "Nvme$subsystem", 00:16:43.887 "trtype": "$TEST_TRANSPORT", 00:16:43.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.887 "adrfam": "ipv4", 00:16:43.887 "trsvcid": "$NVMF_PORT", 00:16:43.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.887 "hdgst": ${hdgst:-false}, 00:16:43.887 "ddgst": ${ddgst:-false} 00:16:43.887 }, 00:16:43.887 "method": "bdev_nvme_attach_controller" 00:16:43.887 } 00:16:43.887 EOF 00:16:43.887 )") 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:43.887 03:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:43.887 "params": { 00:16:43.887 "name": "Nvme0", 00:16:43.887 "trtype": "tcp", 00:16:43.887 "traddr": "10.0.0.2", 00:16:43.887 "adrfam": "ipv4", 00:16:43.887 "trsvcid": "4420", 00:16:43.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:43.887 "hdgst": false, 00:16:43.887 "ddgst": false 00:16:43.887 }, 00:16:43.887 "method": "bdev_nvme_attach_controller" 00:16:43.887 }' 00:16:43.887 [2024-07-23 03:16:10.270260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:43.887 [2024-07-23 03:16:10.270332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412871 ] 00:16:43.887 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.887 [2024-07-23 03:16:10.332514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.887 [2024-07-23 03:16:10.419070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.144 Running I/O for 10 seconds... 
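The wait-for-I/O gate traced next (waitforio) is essentially a short polling loop. The sketch below uses the values visible in the trace: ten retries, a 100-read threshold, and a 0.25 s back-off; rpc_cmd is the harness wrapper around the SPDK RPC client.

    # sketch of the waitforio polling traced below; retry count, threshold
    # and jq filter are taken from the trace
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done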
00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:44.402 03:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=462 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 462 -ge 100 ']' 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.662 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.662 [2024-07-23 03:16:11.127204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 
00:16:44.662 [2024-07-23 03:16:11.127509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is 
same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.127994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.128007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.662 [2024-07-23 03:16:11.128019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58120 is same with the state(5) to be set 00:16:44.663 [2024-07-23 03:16:11.128261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:44.663 [2024-07-23 03:16:11.128902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.128976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.128995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 
03:16:11.129223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.663 [2024-07-23 03:16:11.129371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.663 [2024-07-23 03:16:11.129386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.129978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.664 [2024-07-23 03:16:11.130296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.664 [2024-07-23 03:16:11.130310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372110 is same with the state(5) to be set 00:16:44.664 [2024-07-23 03:16:11.130391] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1372110 was disconnected and freed. reset controller. 
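A note on the long run of notices above: when the I/O qpair is torn down (the "disconnected and freed" message at the end of the block), every request still outstanding on that submission queue is completed with ABORTED - SQ DELETION, and SPDK prints one command/completion pair per aborted request, which is what produces the storm. If you need to digest such a run from a saved console log, a couple of one-liners are enough (the file name nvmf.log is only a placeholder for wherever this console output was captured):

    grep -o 'ABORTED - SQ DELETION' nvmf.log | wc -l                    # how many completions were aborted
    grep -o 'lba:[0-9]*' nvmf.log | sort -t: -k2 -n | sed -n '1p;$p'    # lowest and highest lba: value in the log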
00:16:44.664 [2024-07-23 03:16:11.131656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.664 task offset: 65536 on job bdev=Nvme0n1 fails 00:16:44.664 00:16:44.664 Latency(us) 00:16:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.664 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.664 Job: Nvme0n1 ended in about 0.41 seconds with error 00:16:44.664 Verification LBA range: start 0x0 length 0x400 00:16:44.664 Nvme0n1 : 0.41 1243.34 77.71 155.42 0.00 44508.31 6553.60 38253.61 00:16:44.664 =================================================================================================================== 00:16:44.664 Total : 1243.34 77.71 155.42 0.00 44508.31 6553.60 38253.61 00:16:44.664 [2024-07-23 03:16:11.133818] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:44.664 [2024-07-23 03:16:11.133853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf611e0 (9): Bad file descriptor 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.664 03:16:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:44.664 [2024-07-23 03:16:11.188459] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 412871 00:16:45.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (412871) - No such process 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:45.596 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:45.596 { 00:16:45.596 "params": { 00:16:45.596 "name": "Nvme$subsystem", 00:16:45.596 "trtype": "$TEST_TRANSPORT", 00:16:45.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:45.596 "adrfam": "ipv4", 00:16:45.596 "trsvcid": "$NVMF_PORT", 00:16:45.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:45.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:45.596 "hdgst": ${hdgst:-false}, 00:16:45.596 "ddgst": ${ddgst:-false} 00:16:45.596 }, 00:16:45.596 "method": "bdev_nvme_attach_controller" 00:16:45.596 } 00:16:45.596 EOF 00:16:45.597 )") 00:16:45.597 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:45.597 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:45.597 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:45.597 03:16:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:45.597 "params": { 00:16:45.597 "name": "Nvme0", 00:16:45.597 "trtype": "tcp", 00:16:45.597 "traddr": "10.0.0.2", 00:16:45.597 "adrfam": "ipv4", 00:16:45.597 "trsvcid": "4420", 00:16:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:45.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:45.597 "hdgst": false, 00:16:45.597 "ddgst": false 00:16:45.597 }, 00:16:45.597 "method": "bdev_nvme_attach_controller" 00:16:45.597 }' 00:16:45.854 [2024-07-23 03:16:12.188321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:45.854 [2024-07-23 03:16:12.188399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413148 ] 00:16:45.854 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.854 [2024-07-23 03:16:12.249234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.854 [2024-07-23 03:16:12.337700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.111 Running I/O for 1 seconds... 
00:16:47.482 00:16:47.482 Latency(us) 00:16:47.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.482 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.482 Verification LBA range: start 0x0 length 0x400 00:16:47.482 Nvme0n1 : 1.03 1371.73 85.73 0.00 0.00 45956.55 11213.94 37282.70 00:16:47.482 =================================================================================================================== 00:16:47.482 Total : 1371.73 85.73 0.00 0.00 45956.55 11213.94 37282.70 00:16:47.482 03:16:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:47.482 03:16:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:47.482 03:16:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:47.482 03:16:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:47.482 03:16:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.483 rmmod nvme_tcp 00:16:47.483 rmmod nvme_fabrics 00:16:47.483 rmmod nvme_keyring 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 412828 ']' 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 412828 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 412828 ']' 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 412828 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:47.483 03:16:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 412828 00:16:47.483 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:47.483 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:47.483 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 412828' 00:16:47.483 killing process with pid 412828 00:16:47.483 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 412828 00:16:47.483 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 412828 00:16:47.741 [2024-07-23 03:16:14.242554] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.741 03:16:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.314 03:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.314 03:16:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:50.314 00:16:50.314 real 0m8.682s 00:16:50.314 user 0m19.432s 00:16:50.314 sys 0m2.730s 00:16:50.314 03:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:50.314 03:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.314 ************************************ 00:16:50.314 END TEST nvmf_host_management 00:16:50.314 ************************************ 00:16:50.314 03:16:16 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:50.314 03:16:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:50.314 03:16:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:50.314 03:16:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.314 ************************************ 00:16:50.314 START TEST nvmf_lvol 00:16:50.314 ************************************ 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:50.314 * Looking for test storage... 
00:16:50.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.314 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.315 03:16:16 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.315 03:16:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:52.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.219 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.220 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.220 
03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:16:52.220 00:16:52.220 --- 10.0.0.2 ping statistics --- 00:16:52.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.220 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:52.220 00:16:52.220 --- 10.0.0.1 ping statistics --- 00:16:52.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.220 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=415345 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 415345 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 415345 ']' 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.220 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 [2024-07-23 03:16:18.668061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:52.220 [2024-07-23 03:16:18.668145] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.220 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.220 [2024-07-23 03:16:18.737725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.479 [2024-07-23 03:16:18.830159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.479 [2024-07-23 03:16:18.830235] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:52.479 [2024-07-23 03:16:18.830249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.479 [2024-07-23 03:16:18.830260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.479 [2024-07-23 03:16:18.830269] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.479 [2024-07-23 03:16:18.830423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.479 [2024-07-23 03:16:18.830448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.479 [2024-07-23 03:16:18.830451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.479 03:16:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.737 [2024-07-23 03:16:19.202480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.737 03:16:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.995 03:16:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:52.995 03:16:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:53.253 03:16:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:53.253 03:16:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:53.511 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:53.770 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2cc0b7db-5c35-49ca-af83-d7ed81c229d6 00:16:53.770 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2cc0b7db-5c35-49ca-af83-d7ed81c229d6 lvol 20 00:16:54.027 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=27c33564-f90a-4024-9514-8f3ba55547d1 00:16:54.028 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:54.285 03:16:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27c33564-f90a-4024-9514-8f3ba55547d1 00:16:54.543 03:16:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
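For readers following the trace, the lvol target bring-up recorded above reduces to the RPC sequence below. This is only a condensed sketch of the commands already visible in the log: $RPC stands for the scripts/rpc.py path used throughout, and the command substitutions mirror the lvs/lvol variable assignments the test script makes (the UUIDs printed in this run would differ on another run).

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                       # create the TCP transport with the options the wrapper passes
    $RPC bdev_malloc_create 64 512                                     # Malloc0 (64 MB, 512-byte blocks)
    $RPC bdev_malloc_create 64 512                                     # Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'     # stripe both malloc bdevs into raid0
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)                     # prints the new lvstore UUID
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)                    # 20 = LVOL_BDEV_INIT_SIZE from the test
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0  # allow any host, serial SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420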
00:16:54.801 [2024-07-23 03:16:21.250032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.801 03:16:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.059 03:16:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=415653 00:16:55.059 03:16:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:55.059 03:16:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:55.059 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.993 03:16:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 27c33564-f90a-4024-9514-8f3ba55547d1 MY_SNAPSHOT 00:16:56.252 03:16:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37b6515b-a32b-42a2-8a06-53fd805cd409 00:16:56.252 03:16:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 27c33564-f90a-4024-9514-8f3ba55547d1 30 00:16:56.818 03:16:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37b6515b-a32b-42a2-8a06-53fd805cd409 MY_CLONE 00:16:56.818 03:16:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c47434c5-c8af-43a9-a287-5548a72b58a4 00:16:56.818 03:16:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c47434c5-c8af-43a9-a287-5548a72b58a4 00:16:57.752 03:16:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 415653 00:17:05.862 Initializing NVMe Controllers 00:17:05.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:05.862 Controller IO queue size 128, less than required. 00:17:05.862 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:05.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:05.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:05.862 Initialization complete. Launching workers. 
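The snapshot and clone phase that the perf workload then exercises is equally short; sketched here with the shell variables from the previous block standing in for this run's UUIDs (27c33564-... for the lvol, 37b6515b-... for the snapshot, c47434c5-... for the clone):

    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the lvol
    $RPC bdev_lvol_resize "$lvol" 30                      # grow the live volume to LVOL_BDEV_FINAL_SIZE
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
    $RPC bdev_lvol_inflate "$clone"                       # allocate the clone fully and detach it from the snapshot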
00:17:05.862 ======================================================== 00:17:05.862 Latency(us) 00:17:05.862 Device Information : IOPS MiB/s Average min max 00:17:05.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10383.20 40.56 12332.53 779.68 76894.53 00:17:05.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10358.10 40.46 12361.53 3180.93 57664.11 00:17:05.862 ======================================================== 00:17:05.862 Total : 20741.30 81.02 12347.01 779.68 76894.53 00:17:05.862 00:17:05.862 03:16:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:05.862 03:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27c33564-f90a-4024-9514-8f3ba55547d1 00:17:06.119 03:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2cc0b7db-5c35-49ca-af83-d7ed81c229d6 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.377 rmmod nvme_tcp 00:17:06.377 rmmod nvme_fabrics 00:17:06.377 rmmod nvme_keyring 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 415345 ']' 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 415345 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 415345 ']' 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 415345 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 415345 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 415345' 00:17:06.377 killing process with pid 415345 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 415345 00:17:06.377 03:16:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 415345 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.636 03:16:33 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.636 03:16:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.171 00:17:09.171 real 0m18.862s 00:17:09.171 user 1m2.560s 00:17:09.171 sys 0m6.148s 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.171 ************************************ 00:17:09.171 END TEST nvmf_lvol 00:17:09.171 ************************************ 00:17:09.171 03:16:35 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:09.171 03:16:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:09.171 03:16:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.171 03:16:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.171 ************************************ 00:17:09.171 START TEST nvmf_lvs_grow 00:17:09.171 ************************************ 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:09.171 * Looking for test storage... 
00:17:09.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.171 03:16:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:11.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:11.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:11.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:11.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.073 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:11.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:11.073 00:17:11.073 --- 10.0.0.2 ping statistics --- 00:17:11.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.073 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:17:11.074 00:17:11.074 --- 10.0.0.1 ping statistics --- 00:17:11.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.074 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=418905 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 418905 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 418905 ']' 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.074 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.074 [2024-07-23 03:16:37.491926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:11.074 [2024-07-23 03:16:37.492001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.074 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.074 [2024-07-23 03:16:37.560976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.331 [2024-07-23 03:16:37.654095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.331 [2024-07-23 03:16:37.654158] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:11.331 [2024-07-23 03:16:37.654175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.331 [2024-07-23 03:16:37.654188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.331 [2024-07-23 03:16:37.654200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.331 [2024-07-23 03:16:37.654246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.331 03:16:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:11.596 [2024-07-23 03:16:38.010085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 ************************************ 00:17:11.596 START TEST lvs_grow_clean 00:17:11.596 ************************************ 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.596 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.894 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:11.894 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:12.152 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:12.152 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:12.152 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:12.410 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:12.410 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:12.410 03:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df lvol 150 00:17:12.667 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=40c61df3-581a-48f2-a2de-56fdb2f6fd35 00:17:12.667 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.667 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:12.925 [2024-07-23 03:16:39.303836] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:12.925 [2024-07-23 03:16:39.303945] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:12.925 true 00:17:12.925 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:12.925 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:13.182 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:13.182 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:13.441 03:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40c61df3-581a-48f2-a2de-56fdb2f6fd35 00:17:13.699 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:13.956 [2024-07-23 03:16:40.343026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.956 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=419341 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 419341 /var/tmp/bdevperf.sock 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 419341 ']' 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.214 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.214 [2024-07-23 03:16:40.648765] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:14.214 [2024-07-23 03:16:40.648843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419341 ] 00:17:14.214 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.214 [2024-07-23 03:16:40.708299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.472 [2024-07-23 03:16:40.794732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.472 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.472 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:14.472 03:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:15.036 Nvme0n1 00:17:15.036 03:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:15.295 [ 00:17:15.295 { 00:17:15.295 "name": "Nvme0n1", 00:17:15.295 "aliases": [ 00:17:15.295 "40c61df3-581a-48f2-a2de-56fdb2f6fd35" 00:17:15.295 ], 00:17:15.295 "product_name": "NVMe disk", 00:17:15.295 "block_size": 4096, 00:17:15.295 "num_blocks": 38912, 00:17:15.295 "uuid": "40c61df3-581a-48f2-a2de-56fdb2f6fd35", 00:17:15.295 "assigned_rate_limits": { 00:17:15.295 "rw_ios_per_sec": 0, 00:17:15.295 "rw_mbytes_per_sec": 0, 00:17:15.295 "r_mbytes_per_sec": 0, 00:17:15.295 "w_mbytes_per_sec": 0 00:17:15.295 }, 00:17:15.295 "claimed": false, 00:17:15.295 "zoned": false, 00:17:15.295 "supported_io_types": { 00:17:15.295 "read": true, 00:17:15.295 "write": true, 00:17:15.295 "unmap": true, 00:17:15.295 "write_zeroes": true, 00:17:15.295 "flush": true, 00:17:15.295 "reset": true, 00:17:15.295 "compare": true, 00:17:15.295 "compare_and_write": true, 00:17:15.295 "abort": true, 00:17:15.295 "nvme_admin": true, 00:17:15.295 "nvme_io": true 00:17:15.295 }, 00:17:15.295 "memory_domains": [ 00:17:15.295 { 00:17:15.295 "dma_device_id": "system", 00:17:15.295 "dma_device_type": 1 00:17:15.295 } 00:17:15.295 ], 00:17:15.295 "driver_specific": { 00:17:15.295 "nvme": [ 00:17:15.295 { 00:17:15.295 "trid": { 00:17:15.295 "trtype": "TCP", 00:17:15.295 "adrfam": "IPv4", 00:17:15.295 "traddr": "10.0.0.2", 00:17:15.295 "trsvcid": "4420", 00:17:15.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:15.295 }, 00:17:15.295 "ctrlr_data": { 00:17:15.295 "cntlid": 1, 00:17:15.295 "vendor_id": "0x8086", 00:17:15.295 "model_number": "SPDK bdev Controller", 00:17:15.295 "serial_number": "SPDK0", 00:17:15.295 "firmware_revision": "24.05.1", 00:17:15.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:15.295 "oacs": { 00:17:15.295 "security": 0, 00:17:15.295 "format": 0, 00:17:15.295 "firmware": 0, 00:17:15.295 "ns_manage": 0 00:17:15.295 }, 00:17:15.295 "multi_ctrlr": true, 00:17:15.295 "ana_reporting": false 00:17:15.295 }, 00:17:15.295 "vs": { 00:17:15.295 "nvme_version": "1.3" 00:17:15.295 }, 00:17:15.295 "ns_data": { 00:17:15.295 "id": 1, 00:17:15.295 "can_share": true 00:17:15.295 } 00:17:15.295 } 00:17:15.295 ], 00:17:15.295 "mp_policy": "active_passive" 00:17:15.295 } 00:17:15.295 } 00:17:15.295 ] 00:17:15.295 03:16:41 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=419479 00:17:15.295 03:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:15.295 03:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.295 Running I/O for 10 seconds... 00:17:16.230 Latency(us) 00:17:16.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.230 Nvme0n1 : 1.00 14589.00 56.99 0.00 0.00 0.00 0.00 0.00 00:17:16.230 =================================================================================================================== 00:17:16.230 Total : 14589.00 56.99 0.00 0.00 0.00 0.00 0.00 00:17:16.230 00:17:17.163 03:16:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:17.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.421 Nvme0n1 : 2.00 14717.00 57.49 0.00 0.00 0.00 0.00 0.00 00:17:17.421 =================================================================================================================== 00:17:17.421 Total : 14717.00 57.49 0.00 0.00 0.00 0.00 0.00 00:17:17.421 00:17:17.421 true 00:17:17.421 03:16:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:17.421 03:16:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:17.679 03:16:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:17.679 03:16:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:17.679 03:16:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 419479 00:17:18.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.245 Nvme0n1 : 3.00 14802.00 57.82 0.00 0.00 0.00 0.00 0.00 00:17:18.245 =================================================================================================================== 00:17:18.245 Total : 14802.00 57.82 0.00 0.00 0.00 0.00 0.00 00:17:18.245 00:17:19.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.178 Nvme0n1 : 4.00 14923.75 58.30 0.00 0.00 0.00 0.00 0.00 00:17:19.178 =================================================================================================================== 00:17:19.178 Total : 14923.75 58.30 0.00 0.00 0.00 0.00 0.00 00:17:19.178 00:17:20.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.553 Nvme0n1 : 5.00 14972.60 58.49 0.00 0.00 0.00 0.00 0.00 00:17:20.553 =================================================================================================================== 00:17:20.553 Total : 14972.60 58.49 0.00 0.00 0.00 0.00 0.00 00:17:20.553 00:17:21.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.486 Nvme0n1 : 6.00 15026.50 58.70 0.00 0.00 0.00 0.00 0.00 00:17:21.486 
=================================================================================================================== 00:17:21.486 Total : 15026.50 58.70 0.00 0.00 0.00 0.00 0.00 00:17:21.486 00:17:22.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.421 Nvme0n1 : 7.00 15082.86 58.92 0.00 0.00 0.00 0.00 0.00 00:17:22.421 =================================================================================================================== 00:17:22.421 Total : 15082.86 58.92 0.00 0.00 0.00 0.00 0.00 00:17:22.421 00:17:23.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.354 Nvme0n1 : 8.00 15133.12 59.11 0.00 0.00 0.00 0.00 0.00 00:17:23.354 =================================================================================================================== 00:17:23.354 Total : 15133.12 59.11 0.00 0.00 0.00 0.00 0.00 00:17:23.354 00:17:24.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.287 Nvme0n1 : 9.00 15179.22 59.29 0.00 0.00 0.00 0.00 0.00 00:17:24.288 =================================================================================================================== 00:17:24.288 Total : 15179.22 59.29 0.00 0.00 0.00 0.00 0.00 00:17:24.288 00:17:25.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.219 Nvme0n1 : 10.00 15197.20 59.36 0.00 0.00 0.00 0.00 0.00 00:17:25.219 =================================================================================================================== 00:17:25.219 Total : 15197.20 59.36 0.00 0.00 0.00 0.00 0.00 00:17:25.219 00:17:25.219 00:17:25.219 Latency(us) 00:17:25.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.219 Nvme0n1 : 10.01 15200.34 59.38 0.00 0.00 8415.39 2257.35 16408.27 00:17:25.219 =================================================================================================================== 00:17:25.219 Total : 15200.34 59.38 0.00 0.00 8415.39 2257.35 16408.27 00:17:25.219 0 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 419341 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 419341 ']' 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 419341 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.219 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 419341 00:17:25.477 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:25.477 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:25.477 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 419341' 00:17:25.477 killing process with pid 419341 00:17:25.477 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 419341 00:17:25.477 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.477 00:17:25.477 Latency(us) 00:17:25.477 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:25.477 =================================================================================================================== 00:17:25.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.477 03:16:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 419341 00:17:25.477 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:26.042 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:26.299 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:26.299 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:26.299 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:26.299 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:26.299 03:16:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.557 [2024-07-23 03:16:53.103410] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:26.814 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:27.070 request: 00:17:27.070 { 00:17:27.070 "uuid": "bc374c0e-cbfe-4b37-a222-59ebe3dc47df", 00:17:27.070 "method": "bdev_lvol_get_lvstores", 00:17:27.070 "req_id": 1 00:17:27.070 } 00:17:27.070 Got JSON-RPC error response 00:17:27.070 response: 00:17:27.070 { 00:17:27.070 "code": -19, 00:17:27.070 "message": "No such device" 00:17:27.070 } 00:17:27.071 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:27.071 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.071 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.071 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.071 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.329 aio_bdev 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 40c61df3-581a-48f2-a2de-56fdb2f6fd35 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=40c61df3-581a-48f2-a2de-56fdb2f6fd35 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:27.329 03:16:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:27.657 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 40c61df3-581a-48f2-a2de-56fdb2f6fd35 -t 2000 00:17:27.916 [ 00:17:27.916 { 00:17:27.916 "name": "40c61df3-581a-48f2-a2de-56fdb2f6fd35", 00:17:27.916 "aliases": [ 00:17:27.916 "lvs/lvol" 00:17:27.916 ], 00:17:27.916 "product_name": "Logical Volume", 00:17:27.916 "block_size": 4096, 00:17:27.916 "num_blocks": 38912, 00:17:27.916 "uuid": "40c61df3-581a-48f2-a2de-56fdb2f6fd35", 00:17:27.916 "assigned_rate_limits": { 00:17:27.916 "rw_ios_per_sec": 0, 00:17:27.916 "rw_mbytes_per_sec": 0, 00:17:27.916 "r_mbytes_per_sec": 0, 00:17:27.916 "w_mbytes_per_sec": 0 00:17:27.916 }, 00:17:27.916 "claimed": false, 00:17:27.916 "zoned": false, 00:17:27.916 "supported_io_types": { 00:17:27.916 "read": true, 00:17:27.916 "write": true, 00:17:27.916 "unmap": true, 00:17:27.916 "write_zeroes": true, 00:17:27.916 "flush": false, 00:17:27.916 "reset": true, 00:17:27.916 "compare": false, 00:17:27.916 "compare_and_write": false, 00:17:27.916 "abort": false, 00:17:27.916 "nvme_admin": false, 00:17:27.916 "nvme_io": false 00:17:27.916 }, 00:17:27.916 "driver_specific": { 00:17:27.916 "lvol": { 00:17:27.916 "lvol_store_uuid": "bc374c0e-cbfe-4b37-a222-59ebe3dc47df", 00:17:27.916 "base_bdev": "aio_bdev", 
00:17:27.916 "thin_provision": false, 00:17:27.916 "num_allocated_clusters": 38, 00:17:27.916 "snapshot": false, 00:17:27.916 "clone": false, 00:17:27.916 "esnap_clone": false 00:17:27.916 } 00:17:27.916 } 00:17:27.916 } 00:17:27.916 ] 00:17:27.916 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:27.916 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:27.916 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:28.174 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:28.174 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:28.174 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:28.432 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:28.432 03:16:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40c61df3-581a-48f2-a2de-56fdb2f6fd35 00:17:28.689 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc374c0e-cbfe-4b37-a222-59ebe3dc47df 00:17:28.947 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.205 00:17:29.205 real 0m17.554s 00:17:29.205 user 0m16.813s 00:17:29.205 sys 0m1.953s 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:29.205 ************************************ 00:17:29.205 END TEST lvs_grow_clean 00:17:29.205 ************************************ 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.205 ************************************ 00:17:29.205 START TEST lvs_grow_dirty 00:17:29.205 ************************************ 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.205 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.463 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:29.463 03:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:29.721 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:29.721 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:29.721 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:29.979 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:29.979 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:29.979 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 lvol 150 00:17:30.237 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:30.237 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.237 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:30.495 [2024-07-23 03:16:56.956786] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:30.495 [2024-07-23 03:16:56.956871] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:30.495 true 00:17:30.495 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:30.495 03:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:30.754 03:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:30.754 03:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:31.012 03:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:31.270 03:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:31.528 [2024-07-23 03:16:58.024068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.528 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=421511 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 421511 /var/tmp/bdevperf.sock 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 421511 ']' 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.786 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:32.044 [2024-07-23 03:16:58.377703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:32.044 [2024-07-23 03:16:58.377775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421511 ] 00:17:32.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.045 [2024-07-23 03:16:58.444705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.045 [2024-07-23 03:16:58.535998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.302 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.302 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:32.302 03:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:32.560 Nvme0n1 00:17:32.560 03:16:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:32.819 [ 00:17:32.819 { 00:17:32.819 "name": "Nvme0n1", 00:17:32.819 "aliases": [ 00:17:32.819 "07efc374-1c0b-4a7e-95b2-23545add3d1d" 00:17:32.819 ], 00:17:32.819 "product_name": "NVMe disk", 00:17:32.819 "block_size": 4096, 00:17:32.819 "num_blocks": 38912, 00:17:32.819 "uuid": "07efc374-1c0b-4a7e-95b2-23545add3d1d", 00:17:32.819 "assigned_rate_limits": { 00:17:32.819 "rw_ios_per_sec": 0, 00:17:32.819 "rw_mbytes_per_sec": 0, 00:17:32.819 "r_mbytes_per_sec": 0, 00:17:32.819 "w_mbytes_per_sec": 0 00:17:32.819 }, 00:17:32.819 "claimed": false, 00:17:32.819 "zoned": false, 00:17:32.819 "supported_io_types": { 00:17:32.819 "read": true, 00:17:32.819 "write": true, 00:17:32.819 "unmap": true, 00:17:32.819 "write_zeroes": true, 00:17:32.819 "flush": true, 00:17:32.819 "reset": true, 00:17:32.819 "compare": true, 00:17:32.819 "compare_and_write": true, 00:17:32.819 "abort": true, 00:17:32.819 "nvme_admin": true, 00:17:32.819 "nvme_io": true 00:17:32.819 }, 00:17:32.819 "memory_domains": [ 00:17:32.819 { 00:17:32.819 "dma_device_id": "system", 00:17:32.819 "dma_device_type": 1 00:17:32.819 } 00:17:32.819 ], 00:17:32.819 "driver_specific": { 00:17:32.819 "nvme": [ 00:17:32.819 { 00:17:32.819 "trid": { 00:17:32.819 "trtype": "TCP", 00:17:32.819 "adrfam": "IPv4", 00:17:32.819 "traddr": "10.0.0.2", 00:17:32.819 "trsvcid": "4420", 00:17:32.819 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:32.819 }, 00:17:32.819 "ctrlr_data": { 00:17:32.819 "cntlid": 1, 00:17:32.819 "vendor_id": "0x8086", 00:17:32.819 "model_number": "SPDK bdev Controller", 00:17:32.819 "serial_number": "SPDK0", 00:17:32.819 "firmware_revision": "24.05.1", 00:17:32.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.819 "oacs": { 00:17:32.819 "security": 0, 00:17:32.819 "format": 0, 00:17:32.819 "firmware": 0, 00:17:32.819 "ns_manage": 0 00:17:32.819 }, 00:17:32.819 "multi_ctrlr": true, 00:17:32.819 "ana_reporting": false 00:17:32.819 }, 00:17:32.819 "vs": { 00:17:32.819 "nvme_version": "1.3" 00:17:32.819 }, 00:17:32.819 "ns_data": { 00:17:32.819 "id": 1, 00:17:32.819 "can_share": true 00:17:32.819 } 00:17:32.819 } 00:17:32.819 ], 00:17:32.819 "mp_policy": "active_passive" 00:17:32.819 } 00:17:32.819 } 00:17:32.819 ] 00:17:32.819 03:16:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=421646 00:17:32.819 03:16:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.819 03:16:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:32.819 Running I/O for 10 seconds... 00:17:34.193 Latency(us) 00:17:34.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.193 Nvme0n1 : 1.00 13943.00 54.46 0.00 0.00 0.00 0.00 0.00 00:17:34.193 =================================================================================================================== 00:17:34.193 Total : 13943.00 54.46 0.00 0.00 0.00 0.00 0.00 00:17:34.193 00:17:34.759 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:35.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.017 Nvme0n1 : 2.00 14331.00 55.98 0.00 0.00 0.00 0.00 0.00 00:17:35.017 =================================================================================================================== 00:17:35.017 Total : 14331.00 55.98 0.00 0.00 0.00 0.00 0.00 00:17:35.017 00:17:35.017 true 00:17:35.275 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:35.275 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:35.275 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:35.275 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:35.275 03:17:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 421646 00:17:35.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.842 Nvme0n1 : 3.00 14353.33 56.07 0.00 0.00 0.00 0.00 0.00 00:17:35.842 =================================================================================================================== 00:17:35.842 Total : 14353.33 56.07 0.00 0.00 0.00 0.00 0.00 00:17:35.842 00:17:37.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.214 Nvme0n1 : 4.00 14396.50 56.24 0.00 0.00 0.00 0.00 0.00 00:17:37.214 =================================================================================================================== 00:17:37.214 Total : 14396.50 56.24 0.00 0.00 0.00 0.00 0.00 00:17:37.214 00:17:38.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.149 Nvme0n1 : 5.00 14435.60 56.39 0.00 0.00 0.00 0.00 0.00 00:17:38.149 =================================================================================================================== 00:17:38.149 Total : 14435.60 56.39 0.00 0.00 0.00 0.00 0.00 00:17:38.149 00:17:39.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.083 Nvme0n1 : 6.00 14472.17 56.53 0.00 0.00 0.00 0.00 0.00 00:17:39.083 
=================================================================================================================== 00:17:39.084 Total : 14472.17 56.53 0.00 0.00 0.00 0.00 0.00 00:17:39.084 00:17:40.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.019 Nvme0n1 : 7.00 14489.29 56.60 0.00 0.00 0.00 0.00 0.00 00:17:40.019 =================================================================================================================== 00:17:40.019 Total : 14489.29 56.60 0.00 0.00 0.00 0.00 0.00 00:17:40.019 00:17:40.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.952 Nvme0n1 : 8.00 14518.12 56.71 0.00 0.00 0.00 0.00 0.00 00:17:40.953 =================================================================================================================== 00:17:40.953 Total : 14518.12 56.71 0.00 0.00 0.00 0.00 0.00 00:17:40.953 00:17:41.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.888 Nvme0n1 : 9.00 14540.56 56.80 0.00 0.00 0.00 0.00 0.00 00:17:41.888 =================================================================================================================== 00:17:41.888 Total : 14540.56 56.80 0.00 0.00 0.00 0.00 0.00 00:17:41.888 00:17:43.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.284 Nvme0n1 : 10.00 14552.10 56.84 0.00 0.00 0.00 0.00 0.00 00:17:43.284 =================================================================================================================== 00:17:43.284 Total : 14552.10 56.84 0.00 0.00 0.00 0.00 0.00 00:17:43.284 00:17:43.284 00:17:43.284 Latency(us) 00:17:43.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.284 Nvme0n1 : 10.01 14553.03 56.85 0.00 0.00 8789.64 5145.79 16505.36 00:17:43.284 =================================================================================================================== 00:17:43.284 Total : 14553.03 56.85 0.00 0.00 8789.64 5145.79 16505.36 00:17:43.284 0 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 421511 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 421511 ']' 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 421511 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 421511 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 421511' 00:17:43.284 killing process with pid 421511 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 421511 00:17:43.284 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.284 00:17:43.284 Latency(us) 00:17:43.284 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:43.284 =================================================================================================================== 00:17:43.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 421511 00:17:43.284 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.561 03:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:43.819 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:43.819 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 418905 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 418905 00:17:44.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 418905 Killed "${NVMF_APP[@]}" "$@" 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=422858 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 422858 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 422858 ']' 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
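The restart being traced here launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks until its JSON-RPC socket answers. A minimal hand-run sketch of the same sequence, using the paths shown in this log (the polling loop approximates waitforlisten rather than reproducing it exactly):

    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # poll the default RPC socket until the target is ready to serve requests
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done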
00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.078 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:44.078 [2024-07-23 03:17:10.519255] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:44.078 [2024-07-23 03:17:10.519329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.078 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.078 [2024-07-23 03:17:10.587971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.336 [2024-07-23 03:17:10.672118] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.336 [2024-07-23 03:17:10.672167] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.336 [2024-07-23 03:17:10.672188] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.336 [2024-07-23 03:17:10.672213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.336 [2024-07-23 03:17:10.672223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.336 [2024-07-23 03:17:10.672247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.336 03:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:44.595 [2024-07-23 03:17:11.033459] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:44.595 [2024-07-23 03:17:11.033589] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:44.595 [2024-07-23 03:17:11.033668] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:44.595 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.853 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 07efc374-1c0b-4a7e-95b2-23545add3d1d -t 2000 00:17:45.110 [ 00:17:45.110 { 00:17:45.110 "name": "07efc374-1c0b-4a7e-95b2-23545add3d1d", 00:17:45.110 "aliases": [ 00:17:45.110 "lvs/lvol" 00:17:45.110 ], 00:17:45.110 "product_name": "Logical Volume", 00:17:45.110 "block_size": 4096, 00:17:45.110 "num_blocks": 38912, 00:17:45.110 "uuid": "07efc374-1c0b-4a7e-95b2-23545add3d1d", 00:17:45.110 "assigned_rate_limits": { 00:17:45.110 "rw_ios_per_sec": 0, 00:17:45.110 "rw_mbytes_per_sec": 0, 00:17:45.110 "r_mbytes_per_sec": 0, 00:17:45.110 "w_mbytes_per_sec": 0 00:17:45.110 }, 00:17:45.110 "claimed": false, 00:17:45.110 "zoned": false, 00:17:45.110 "supported_io_types": { 00:17:45.110 "read": true, 00:17:45.110 "write": true, 00:17:45.110 "unmap": true, 00:17:45.110 "write_zeroes": true, 00:17:45.110 "flush": false, 00:17:45.110 "reset": true, 00:17:45.110 "compare": false, 00:17:45.110 "compare_and_write": false, 00:17:45.110 "abort": false, 00:17:45.110 "nvme_admin": false, 00:17:45.110 "nvme_io": false 00:17:45.110 }, 00:17:45.110 "driver_specific": { 00:17:45.110 "lvol": { 00:17:45.110 "lvol_store_uuid": "a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4", 00:17:45.110 "base_bdev": "aio_bdev", 00:17:45.110 "thin_provision": false, 00:17:45.110 "num_allocated_clusters": 38, 00:17:45.110 "snapshot": false, 00:17:45.110 "clone": false, 00:17:45.110 "esnap_clone": false 00:17:45.110 } 00:17:45.110 } 00:17:45.110 } 00:17:45.110 ] 00:17:45.110 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:45.110 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:45.110 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:45.368 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:45.368 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:45.368 03:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:45.627 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:45.627 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.885 [2024-07-23 03:17:12.318686] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:45.885 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:46.143 request: 00:17:46.143 { 00:17:46.143 "uuid": "a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4", 00:17:46.144 "method": "bdev_lvol_get_lvstores", 00:17:46.144 "req_id": 1 00:17:46.144 } 00:17:46.144 Got JSON-RPC error response 00:17:46.144 response: 00:17:46.144 { 00:17:46.144 "code": -19, 00:17:46.144 "message": "No such device" 00:17:46.144 } 00:17:46.144 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:46.144 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:46.144 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:46.144 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:46.144 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:46.401 aio_bdev 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
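The waitforbdev call being set up here closes the dirty-recovery loop exercised just above: bdev_lvol_get_lvstores fails with -19 while aio_bdev is absent, re-creating the AIO bdev triggers blobstore recovery, and the lvol plus its lvstore become visible again. A condensed sketch of that verification, using only RPC calls and values that appear in this run (the lvstore UUID, lvol name and backing file path are this job's):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    LVS=a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4
    $RPC bdev_lvol_get_lvstores -u $LVS && exit 1        # expected: -19 / "No such device" while aio_bdev is gone
    $RPC bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    $RPC bdev_get_bdevs -b 07efc374-1c0b-4a7e-95b2-23545add3d1d -t 2000   # lvol reappears after recovery
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'      # 61 again once recovery completes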
00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:46.401 03:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.659 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 07efc374-1c0b-4a7e-95b2-23545add3d1d -t 2000 00:17:46.916 [ 00:17:46.916 { 00:17:46.916 "name": "07efc374-1c0b-4a7e-95b2-23545add3d1d", 00:17:46.916 "aliases": [ 00:17:46.916 "lvs/lvol" 00:17:46.916 ], 00:17:46.916 "product_name": "Logical Volume", 00:17:46.916 "block_size": 4096, 00:17:46.916 "num_blocks": 38912, 00:17:46.916 "uuid": "07efc374-1c0b-4a7e-95b2-23545add3d1d", 00:17:46.916 "assigned_rate_limits": { 00:17:46.916 "rw_ios_per_sec": 0, 00:17:46.916 "rw_mbytes_per_sec": 0, 00:17:46.916 "r_mbytes_per_sec": 0, 00:17:46.916 "w_mbytes_per_sec": 0 00:17:46.916 }, 00:17:46.916 "claimed": false, 00:17:46.916 "zoned": false, 00:17:46.916 "supported_io_types": { 00:17:46.916 "read": true, 00:17:46.916 "write": true, 00:17:46.916 "unmap": true, 00:17:46.916 "write_zeroes": true, 00:17:46.916 "flush": false, 00:17:46.916 "reset": true, 00:17:46.916 "compare": false, 00:17:46.916 "compare_and_write": false, 00:17:46.916 "abort": false, 00:17:46.916 "nvme_admin": false, 00:17:46.916 "nvme_io": false 00:17:46.916 }, 00:17:46.916 "driver_specific": { 00:17:46.916 "lvol": { 00:17:46.916 "lvol_store_uuid": "a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4", 00:17:46.916 "base_bdev": "aio_bdev", 00:17:46.916 "thin_provision": false, 00:17:46.916 "num_allocated_clusters": 38, 00:17:46.916 "snapshot": false, 00:17:46.916 "clone": false, 00:17:46.916 "esnap_clone": false 00:17:46.916 } 00:17:46.916 } 00:17:46.916 } 00:17:46.916 ] 00:17:46.916 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:46.916 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:46.916 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:47.173 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:47.173 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:47.173 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:47.431 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:47.431 03:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 07efc374-1c0b-4a7e-95b2-23545add3d1d 00:17:47.689 03:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6b3ac89-a8cf-4c9c-8230-209d32f2ddf4 00:17:47.947 03:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:48.205 00:17:48.205 real 0m19.091s 00:17:48.205 user 0m48.055s 00:17:48.205 sys 0m4.963s 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:48.205 ************************************ 00:17:48.205 END TEST lvs_grow_dirty 00:17:48.205 ************************************ 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:48.205 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:48.463 nvmf_trace.0 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.463 rmmod nvme_tcp 00:17:48.463 rmmod nvme_fabrics 00:17:48.463 rmmod nvme_keyring 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 422858 ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 422858 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 422858 ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 422858 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 422858 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 422858' 00:17:48.463 killing process with pid 422858 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 422858 00:17:48.463 03:17:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 422858 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.723 03:17:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.630 03:17:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.630 00:17:50.630 real 0m41.897s 00:17:50.630 user 1m10.610s 00:17:50.630 sys 0m8.752s 00:17:50.630 03:17:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:50.630 03:17:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:50.630 ************************************ 00:17:50.630 END TEST nvmf_lvs_grow 00:17:50.630 ************************************ 00:17:50.630 03:17:17 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:50.630 03:17:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:50.630 03:17:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:50.630 03:17:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.888 ************************************ 00:17:50.888 START TEST nvmf_bdev_io_wait 00:17:50.888 ************************************ 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:50.888 * Looking for test storage... 
00:17:50.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.888 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.889 03:17:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:52.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:52.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:52.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:52.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:52.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:52.792 00:17:52.792 --- 10.0.0.2 ping statistics --- 00:17:52.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.792 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:52.792 00:17:52.792 --- 10.0.0.1 ping statistics --- 00:17:52.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.792 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.792 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=425367 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 425367 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 425367 ']' 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:53.051 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.051 [2024-07-23 03:17:19.423039] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:53.051 [2024-07-23 03:17:19.423130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.051 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.051 [2024-07-23 03:17:19.496258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.051 [2024-07-23 03:17:19.592765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.051 [2024-07-23 03:17:19.592815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.051 [2024-07-23 03:17:19.592840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.051 [2024-07-23 03:17:19.592852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.051 [2024-07-23 03:17:19.592863] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.051 [2024-07-23 03:17:19.592976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.051 [2024-07-23 03:17:19.593027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.051 [2024-07-23 03:17:19.593153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.051 [2024-07-23 03:17:19.593156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 [2024-07-23 03:17:19.741462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 Malloc0 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 [2024-07-23 03:17:19.813224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=425509 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=425510 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=425513 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.310 { 00:17:53.310 "params": { 00:17:53.310 "name": "Nvme$subsystem", 00:17:53.310 "trtype": "$TEST_TRANSPORT", 00:17:53.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.310 "adrfam": 
"ipv4", 00:17:53.310 "trsvcid": "$NVMF_PORT", 00:17:53.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.310 "hdgst": ${hdgst:-false}, 00:17:53.310 "ddgst": ${ddgst:-false} 00:17:53.310 }, 00:17:53.310 "method": "bdev_nvme_attach_controller" 00:17:53.310 } 00:17:53.310 EOF 00:17:53.310 )") 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.310 { 00:17:53.310 "params": { 00:17:53.310 "name": "Nvme$subsystem", 00:17:53.310 "trtype": "$TEST_TRANSPORT", 00:17:53.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.310 "adrfam": "ipv4", 00:17:53.310 "trsvcid": "$NVMF_PORT", 00:17:53.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.310 "hdgst": ${hdgst:-false}, 00:17:53.310 "ddgst": ${ddgst:-false} 00:17:53.310 }, 00:17:53.310 "method": "bdev_nvme_attach_controller" 00:17:53.310 } 00:17:53.310 EOF 00:17:53.310 )") 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=425515 00:17:53.310 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.311 { 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme$subsystem", 00:17:53.311 "trtype": "$TEST_TRANSPORT", 00:17:53.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "$NVMF_PORT", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.311 "hdgst": ${hdgst:-false}, 00:17:53.311 "ddgst": ${ddgst:-false} 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 00:17:53.311 } 00:17:53.311 EOF 00:17:53.311 )") 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:17:53.311 { 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme$subsystem", 00:17:53.311 "trtype": "$TEST_TRANSPORT", 00:17:53.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "$NVMF_PORT", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.311 "hdgst": ${hdgst:-false}, 00:17:53.311 "ddgst": ${ddgst:-false} 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 00:17:53.311 } 00:17:53.311 EOF 00:17:53.311 )") 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 425509 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme1", 00:17:53.311 "trtype": "tcp", 00:17:53.311 "traddr": "10.0.0.2", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "4420", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.311 "hdgst": false, 00:17:53.311 "ddgst": false 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 00:17:53.311 }' 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme1", 00:17:53.311 "trtype": "tcp", 00:17:53.311 "traddr": "10.0.0.2", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "4420", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.311 "hdgst": false, 00:17:53.311 "ddgst": false 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 00:17:53.311 }' 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme1", 00:17:53.311 "trtype": "tcp", 00:17:53.311 "traddr": "10.0.0.2", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "4420", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.311 "hdgst": false, 00:17:53.311 "ddgst": false 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 00:17:53.311 }' 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:53.311 03:17:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.311 "params": { 00:17:53.311 "name": "Nvme1", 00:17:53.311 "trtype": "tcp", 00:17:53.311 "traddr": "10.0.0.2", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "4420", 00:17:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.311 "hdgst": false, 00:17:53.311 "ddgst": false 00:17:53.311 }, 00:17:53.311 "method": "bdev_nvme_attach_controller" 
00:17:53.311 }' 00:17:53.311 [2024-07-23 03:17:19.861542] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:53.311 [2024-07-23 03:17:19.861624] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:53.311 [2024-07-23 03:17:19.862609] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:53.311 [2024-07-23 03:17:19.862610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:53.311 [2024-07-23 03:17:19.862609] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:53.311 [2024-07-23 03:17:19.862700] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:53.311 [2024-07-23 03:17:19.862701] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:53.311 [2024-07-23 03:17:19.862701] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:53.569 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.569 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.569 [2024-07-23 03:17:20.038218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.569 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.569 [2024-07-23 03:17:20.114059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:53.569 [2024-07-23 03:17:20.139893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.827 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.827 [2024-07-23 03:17:20.215252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:53.827 [2024-07-23 03:17:20.238142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.827 [2024-07-23 03:17:20.313366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.827 [2024-07-23 03:17:20.317249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:53.827 [2024-07-23 03:17:20.382174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:54.085 Running I/O for 1 seconds... 00:17:54.085 Running I/O for 1 seconds... 00:17:54.085 Running I/O for 1 seconds... 00:17:54.085 Running I/O for 1 seconds... 
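(Note: the four bdevperf jobs traced above all read the same generated configuration: gen_nvmf_target_json in nvmf/common.sh appends one heredoc JSON fragment per subsystem to a config array, pipes the result through jq, and each bdevperf instance consumes it via --json /dev/fd/63. Below is a condensed sketch of that pattern only, not the full helper; the function name gen_attach_json is illustrative, and the hard-coded values are the ones the printf output above resolved to.)
# Condensed sketch of the heredoc-plus-jq pattern exercised in the trace above.
gen_attach_json() {
    local subsystem=1
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    local -a config=()
    config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
    # jq validates and pretty-prints; the caller hands the result to bdevperf on /dev/fd/63
    printf '%s\n' "${config[@]}" | jq .
}
# Each instance then consumes it, e.g. (flags copied from the trace):
#   bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256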
00:17:55.020 00:17:55.020 Latency(us) 00:17:55.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.020 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:55.020 Nvme1n1 : 1.01 11642.42 45.48 0.00 0.00 10950.55 6796.33 17670.45 00:17:55.020 =================================================================================================================== 00:17:55.020 Total : 11642.42 45.48 0.00 0.00 10950.55 6796.33 17670.45 00:17:55.020 00:17:55.020 Latency(us) 00:17:55.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.020 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:55.020 Nvme1n1 : 1.00 195160.57 762.35 0.00 0.00 653.26 291.27 898.09 00:17:55.020 =================================================================================================================== 00:17:55.020 Total : 195160.57 762.35 0.00 0.00 653.26 291.27 898.09 00:17:55.279 00:17:55.279 Latency(us) 00:17:55.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.279 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:55.279 Nvme1n1 : 1.01 11394.76 44.51 0.00 0.00 11169.94 6505.05 23495.87 00:17:55.279 =================================================================================================================== 00:17:55.279 Total : 11394.76 44.51 0.00 0.00 11169.94 6505.05 23495.87 00:17:55.279 00:17:55.279 Latency(us) 00:17:55.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.279 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:55.279 Nvme1n1 : 1.03 1099.69 4.30 0.00 0.00 115237.97 14757.74 139033.41 00:17:55.279 =================================================================================================================== 00:17:55.279 Total : 1099.69 4.30 0.00 0.00 115237.97 14757.74 139033.41 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 425510 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 425513 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 425515 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.537 03:17:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.537 rmmod nvme_tcp 00:17:55.537 rmmod nvme_fabrics 00:17:55.537 rmmod nvme_keyring 00:17:55.537 03:17:22 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 425367 ']' 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 425367 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 425367 ']' 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 425367 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 425367 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 425367' 00:17:55.537 killing process with pid 425367 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 425367 00:17:55.537 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 425367 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.797 03:17:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.333 03:17:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.333 00:17:58.333 real 0m7.072s 00:17:58.333 user 0m14.893s 00:17:58.333 sys 0m3.527s 00:17:58.333 03:17:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.333 03:17:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:58.333 ************************************ 00:17:58.333 END TEST nvmf_bdev_io_wait 00:17:58.333 ************************************ 00:17:58.333 03:17:24 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:58.333 03:17:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.333 03:17:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.333 03:17:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.333 ************************************ 00:17:58.333 START TEST nvmf_queue_depth 00:17:58.333 ************************************ 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:58.333 * Looking for test storage... 00:17:58.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.333 03:17:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.334 03:17:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.284 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.285 
03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:18:00.285 00:18:00.285 --- 10.0.0.2 ping statistics --- 00:18:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.285 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:18:00.285 00:18:00.285 --- 10.0.0.1 ping statistics --- 00:18:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.285 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=427750 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 427750 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 427750 ']' 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.285 [2024-07-23 03:17:26.567765] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
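(Note: the two ping exchanges above come from nvmf_tcp_init in nvmf/common.sh: the first E810 port, cvl_0_0, is moved into a private namespace as the target side, while its sibling cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of that sequence, with the commands and addresses taken from the trace and error handling omitted:)
# Condensed from the nvmf_tcp_init steps traced above (nvmf/common.sh@244-268).
TARGET_IF=cvl_0_0        # gets 10.0.0.2 inside the namespace (target side)
INITIATOR_IF=cvl_0_1     # keeps 10.0.0.1 in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespaced target -> initiator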
00:18:00.285 [2024-07-23 03:17:26.567848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.285 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.285 [2024-07-23 03:17:26.632408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.285 [2024-07-23 03:17:26.719990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.285 [2024-07-23 03:17:26.720053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.285 [2024-07-23 03:17:26.720067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.285 [2024-07-23 03:17:26.720077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.285 [2024-07-23 03:17:26.720087] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.285 [2024-07-23 03:17:26.720113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:00.285 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.286 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-07-23 03:17:26.864724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 Malloc0 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 03:17:26 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-07-23 03:17:26.931241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=427775 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 427775 /var/tmp/bdevperf.sock 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 427775 ']' 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.544 03:17:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-07-23 03:17:26.979412] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
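(Note: the rpc_cmd calls interleaved above amount to a short, linear target bring-up. Stripped of the xtrace wrappers it is roughly the following; the harness actually goes through its rpc_cmd wrapper rather than invoking scripts/rpc.py directly, so treat this as an equivalent sketch with arguments copied from the trace and paths from this workspace.)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
# Target side: TCP transport, one 64 MiB / 512 B malloc bdev, one subsystem with a listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: start bdevperf idle (-z) on its own RPC socket with queue depth 1024
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &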
00:18:00.544 [2024-07-23 03:17:26.979501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427775 ] 00:18:00.544 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.544 [2024-07-23 03:17:27.042554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.803 [2024-07-23 03:17:27.135439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.803 03:17:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.803 03:17:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:00.803 03:17:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.803 03:17:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.803 03:17:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 NVMe0n1 00:18:01.061 03:17:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.061 03:17:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.061 Running I/O for 10 seconds... 00:18:13.265 00:18:13.265 Latency(us) 00:18:13.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:13.265 Verification LBA range: start 0x0 length 0x4000 00:18:13.265 NVMe0n1 : 10.10 8399.42 32.81 0.00 0.00 121405.30 24563.86 74953.77 00:18:13.265 =================================================================================================================== 00:18:13.265 Total : 8399.42 32.81 0.00 0.00 121405.30 24563.86 74953.77 00:18:13.265 0 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 427775 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 427775 ']' 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 427775 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 427775 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 427775' 00:18:13.265 killing process with pid 427775 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 427775 00:18:13.265 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.265 00:18:13.265 Latency(us) 00:18:13.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.265 =================================================================================================================== 00:18:13.265 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 427775 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.265 03:17:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.265 rmmod nvme_tcp 00:18:13.265 rmmod nvme_fabrics 00:18:13.265 rmmod nvme_keyring 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 427750 ']' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 427750 ']' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 427750' 00:18:13.265 killing process with pid 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 427750 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.265 03:17:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.872 03:17:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:13.872 00:18:13.872 real 0m16.006s 00:18:13.872 user 0m22.427s 00:18:13.872 sys 0m3.159s 
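(Note: the verify result above, 8399.42 IOPS at queue depth 1024, is produced by driving the idle bdevperf process over its private RPC socket: the remote subsystem is attached as a local bdev, then bdevperf.py triggers the run. Roughly, with the arguments the trace shows; as before, rpc.py stands in for the harness's rpc_cmd wrapper.)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Attach cnode1 inside the bdevperf process; its namespace shows up as bdev NVMe0n1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Kick off the 10 s verify workload that was parked behind -z
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests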
00:18:13.872 03:17:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:13.872 03:17:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.872 ************************************ 00:18:13.872 END TEST nvmf_queue_depth 00:18:13.873 ************************************ 00:18:13.873 03:17:40 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.873 03:17:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:13.873 03:17:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:13.873 03:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.873 ************************************ 00:18:13.873 START TEST nvmf_target_multipath 00:18:13.873 ************************************ 00:18:13.873 03:17:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.873 * Looking for test storage... 00:18:14.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.132 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.133 03:17:40 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.133 03:17:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.037 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.037 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.037 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:18:16.038 00:18:16.038 --- 10.0.0.2 ping statistics --- 00:18:16.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.038 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:18:16.038 00:18:16.038 --- 10.0.0.1 ping statistics --- 00:18:16.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.038 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:16.038 only one NIC for nvmf test 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.038 rmmod nvme_tcp 00:18:16.038 rmmod nvme_fabrics 00:18:16.038 rmmod nvme_keyring 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.038 03:17:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:18.574 00:18:18.574 real 0m4.231s 00:18:18.574 user 0m0.837s 00:18:18.574 sys 0m1.391s 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:18.574 03:17:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 ************************************ 00:18:18.574 END TEST nvmf_target_multipath 00:18:18.574 ************************************ 00:18:18.574 03:17:44 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.574 03:17:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:18.574 03:17:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:18.574 03:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 ************************************ 00:18:18.574 START TEST nvmf_zcopy 00:18:18.574 ************************************ 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.574 * Looking for test storage... 
00:18:18.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.574 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.575 03:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:20.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.477 
03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:20.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:20.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:20.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.477 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:20.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:20.478 00:18:20.478 --- 10.0.0.2 ping statistics --- 00:18:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.478 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:20.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:18:20.478 00:18:20.478 --- 10.0.0.1 ping statistics --- 00:18:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.478 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=432818 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 432818 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 432818 ']' 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.478 03:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.478 [2024-07-23 03:17:46.823391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:20.478 [2024-07-23 03:17:46.823488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.478 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.478 [2024-07-23 03:17:46.894007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.478 [2024-07-23 03:17:46.986761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.478 [2024-07-23 03:17:46.986816] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
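Aside for anyone replaying this phy-mode TCP setup by hand: the nvmf_tcp_init trace above (common.sh@229-@268, repeated for each test) reduces to the bring-up below. This is a plain sketch reconstructed from the trace, not the test script itself, and it assumes the two ice ports have already been bound to the kernel driver and renamed cvl_0_0 / cvl_0_1 by the CI tooling; run as root.
  # clear any stale addressing, then move one port into a private namespace
  # so the target and the initiator see separate network stacks on one host
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 stays on the initiator-side port, 10.0.0.2 goes to the namespaced target port
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # load the kernel initiator used by the TCP test suites
  modprobe nvme-tcp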
00:18:20.478 [2024-07-23 03:17:46.986840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.478 [2024-07-23 03:17:46.986851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.478 [2024-07-23 03:17:46.986861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.478 [2024-07-23 03:17:46.986886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 [2024-07-23 03:17:47.130632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 [2024-07-23 03:17:47.146829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 malloc0 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 
03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.737 { 00:18:20.737 "params": { 00:18:20.737 "name": "Nvme$subsystem", 00:18:20.737 "trtype": "$TEST_TRANSPORT", 00:18:20.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.737 "adrfam": "ipv4", 00:18:20.737 "trsvcid": "$NVMF_PORT", 00:18:20.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.737 "hdgst": ${hdgst:-false}, 00:18:20.737 "ddgst": ${ddgst:-false} 00:18:20.737 }, 00:18:20.737 "method": "bdev_nvme_attach_controller" 00:18:20.737 } 00:18:20.737 EOF 00:18:20.737 )") 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:20.737 03:17:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.737 "params": { 00:18:20.737 "name": "Nvme1", 00:18:20.737 "trtype": "tcp", 00:18:20.737 "traddr": "10.0.0.2", 00:18:20.737 "adrfam": "ipv4", 00:18:20.737 "trsvcid": "4420", 00:18:20.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.737 "hdgst": false, 00:18:20.737 "ddgst": false 00:18:20.737 }, 00:18:20.737 "method": "bdev_nvme_attach_controller" 00:18:20.737 }' 00:18:20.737 [2024-07-23 03:17:47.225760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:20.737 [2024-07-23 03:17:47.225855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432960 ] 00:18:20.737 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.737 [2024-07-23 03:17:47.286964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.996 [2024-07-23 03:17:47.378075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.254 Running I/O for 10 seconds... 
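Stripped of the rpc_cmd wrappers, the target-side configuration traced above (zcopy.sh@22 through @30) corresponds to the rpc.py sequence below. This is a reference sketch only: it assumes nvmf_tgt was started inside the namespace exactly as shown (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and is listening on the default /var/tmp/spdk.sock, so rpc.py can be run from the repository root without any extra -s argument.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # TCP transport; "-t tcp -o" comes from the suite's NVMF_TRANSPORT_OPTS,
  # "-c 0 --zcopy" is what zcopy.sh@22 adds for this test
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem cnode1, any host allowed (-a), max 10 namespaces (-m 10)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
The initiator side is then exercised with bdevperf fed by the generated JSON shown in the trace, e.g. .../build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 as in zcopy.sh@33.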
00:18:33.493 
00:18:33.493 Latency(us) 
00:18:33.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:33.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 
00:18:33.493 Verification LBA range: start 0x0 length 0x1000 
00:18:33.493 Nvme1n1 : 10.06 6046.72 47.24 0.00 0.00 21040.67 3689.43 47962.64 
00:18:33.493 =================================================================================================================== 
00:18:33.493 Total : 6046.72 47.24 0.00 0.00 21040.67 3689.43 47962.64 
00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=434155 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.493 { 00:18:33.493 "params": { 00:18:33.493 "name": "Nvme$subsystem", 00:18:33.493 "trtype": "$TEST_TRANSPORT", 00:18:33.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.493 "adrfam": "ipv4", 00:18:33.493 "trsvcid": "$NVMF_PORT", 00:18:33.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.493 "hdgst": ${hdgst:-false}, 00:18:33.493 "ddgst": ${ddgst:-false} 00:18:33.493 }, 00:18:33.493 "method": "bdev_nvme_attach_controller" 00:18:33.493 } 00:18:33.493 EOF 00:18:33.493 )") 00:18:33.493 [2024-07-23 03:17:58.083404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.083458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
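A note on the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that start here and continue for the rest of the run: NSID 1 on cnode1 is already occupied by malloc0 (added at zcopy.sh@30 above), so each pair appears to be the expected rejection of a re-issued namespace-add while bdevperf I/O is in flight; the nvmf_rpc_ns_paused frame in the message suggests the RPC pauses and resumes the subsystem around the failed add, which is presumably the path this test is exercising rather than a malfunction. A hedged reconstruction of the call that would produce one such pair:
  # re-adding malloc0 as NSID 1 while it is still attached is rejected by the target
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1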
00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:33.493 03:17:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:33.493 "params": { 00:18:33.493 "name": "Nvme1", 00:18:33.493 "trtype": "tcp", 00:18:33.493 "traddr": "10.0.0.2", 00:18:33.493 "adrfam": "ipv4", 00:18:33.493 "trsvcid": "4420", 00:18:33.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.493 "hdgst": false, 00:18:33.493 "ddgst": false 00:18:33.493 }, 00:18:33.493 "method": "bdev_nvme_attach_controller" 00:18:33.493 }' 00:18:33.493 [2024-07-23 03:17:58.091330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.091357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.099345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.099368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.107362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.107383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.115380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.115401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.121602] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:33.493 [2024-07-23 03:17:58.121690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434155 ] 00:18:33.493 [2024-07-23 03:17:58.123399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.123419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.131419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.131439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.139442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.139462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.147466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.147486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.493 [2024-07-23 03:17:58.155505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.155530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.163527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.493 [2024-07-23 03:17:58.163551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.493 [2024-07-23 03:17:58.171549] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.171574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.179571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.179595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.185845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.494 [2024-07-23 03:17:58.187594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.187624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.195674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.195711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.203679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.203710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.211679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.211701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.219699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.219721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.227717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.227740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.235733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.235756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.243784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.243819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.251776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.251799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.259794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.259816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.267817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.267840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.275841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.275865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.281856] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:33.494 [2024-07-23 03:17:58.283862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.283884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.291881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.291920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.299952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.300006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.308019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.308061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.316039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.316086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.324029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.324072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.332066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.332119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.340094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.340140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.348069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.348097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.356109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.356146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.364141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.364182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.372164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.372206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.380152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.380177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.388174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.388200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.396208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:33.494 [2024-07-23 03:17:58.396239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.404231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.404259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.412251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.412278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.420269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.420296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.428294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.428319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.436315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.436340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.444341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.444365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.452365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.452390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.460393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.460420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.468427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.468454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.476436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.476462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.484458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.484491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.492480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.492505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.500502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.500527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.508508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.508529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.516534] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.516557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.524556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.524576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.532578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.532621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.540621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.540644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.548644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.548677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.556686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.556710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.564691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.564715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.572717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.572740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.580726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.580748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.588753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-23 03:17:58.588775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-23 03:17:58.596777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.596805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.604797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.604819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.612831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.612873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.620844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.620867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 Running I/O for 5 seconds... 
00:18:33.495 [2024-07-23 03:17:58.628865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.628887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.642381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.642421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.652863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.652893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.663424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.663453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.675413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.675442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.684163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.684191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.696561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.696589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.705961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.705990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.716582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.716610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.726811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.726839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.736937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.736965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.746908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.746937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.757218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.757246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.767437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.767465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.777772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 
[2024-07-23 03:17:58.777801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.788091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.788119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.800657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.800685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.810161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.810188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.820735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.820763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.831557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.831585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.841758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.841786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.852214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.852242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.864141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.864169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.873506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.873534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.884734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.884762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.896721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.896748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.905951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.905979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.917087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.917116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.927488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.927516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.937815] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.937842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.950224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.950252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.959838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.959866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.970606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.970642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.980841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.980869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:58.991223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:58.991252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.003573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.003601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.013058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.013085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.024126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.024154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.034270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.034297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.044495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.044523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.054881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.054909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.067291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.067319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.076518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.076545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.087084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.087112] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.099469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.099497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.108499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.108527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.119436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.119463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.131376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.131404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.140351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.140379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-23 03:17:59.153263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-23 03:17:59.153291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.164890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.164918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.173360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.173388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.185545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.185573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.196175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.196204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.207070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.207097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.218039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.218066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.228663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.228691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.241593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.241628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.251045] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.251072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.262718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.262757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.273650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.273678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.284554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.284581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.296864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.296892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.306713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.306742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.317884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.317912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.329119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.329147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.340002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.340031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.350737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.350765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.361574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.361602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.372594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.372629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.383401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.383428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.393782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.393811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.404186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.404214] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.414277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.414306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.425331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.425360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.435808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.435836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.446369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.446404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.457638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.457666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.468271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.468300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.479216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.479245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.489904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.489932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.502563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.502591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.512643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.512687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.523542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.523571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.534152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.534180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.545161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.545189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.555650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.555686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.566504] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.566532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.578787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.578816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.588291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.588319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.600216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.600244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.610957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.610985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.621927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.621955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.632596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.632649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.643862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.643895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.654322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.654356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.665006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.665034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.677712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.677740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.687661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.687689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.699255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.699283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.709940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.709968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.720736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.720764] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.733589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.496 [2024-07-23 03:17:59.733631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.496 [2024-07-23 03:17:59.743139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.743167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.754330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.754358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.765050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.765078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.776035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.776062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.786819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.786847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.797756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.797783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.808483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.808510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.819143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.819170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.829916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.829944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.840523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.840561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.853165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.853193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.863583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.863625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.874068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.874096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.884927] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.884955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.895723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.895751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.906487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.906516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.917172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.917200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.927936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.927964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.938563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.938590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.949135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.949162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.959471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.959500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.971057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.971085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.981644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.981672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:17:59.992293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:17:59.992321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.005200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.005228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.021306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.021340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.031405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.031433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.042812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.042841] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.053086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.053113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.497 [2024-07-23 03:18:00.063983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.497 [2024-07-23 03:18:00.064011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.075100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.075136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.086252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.086281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.098824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.098851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.108809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.108837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.120203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.120232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.130848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.130876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.141275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.141302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.152198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.152227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.163489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.163517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.173910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.173938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.184764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.184791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.195500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.195528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.206256] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.206285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.218852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.218880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.228439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.228467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.239631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.239658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.250223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.250251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.261031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.261059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.272183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.272212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.283389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.283424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.296237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.296264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.306165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.306196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.317934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.317970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.756 [2024-07-23 03:18:00.329104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.756 [2024-07-23 03:18:00.329131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.016 [2024-07-23 03:18:00.339653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.016 [2024-07-23 03:18:00.339681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.016 [2024-07-23 03:18:00.352687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.016 [2024-07-23 03:18:00.352715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.016 [2024-07-23 03:18:00.363015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.016 [2024-07-23 03:18:00.363046] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.016 [2024-07-23 03:18:00.373918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.016 [2024-07-23 03:18:00.373945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.016 [2024-07-23 03:18:00.384798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.384826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.395667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.395694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.407043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.407071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.418038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.418069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.428534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.428562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.439498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.439525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.450581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.450610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.462107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.462136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.473335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.473363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.484847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.484878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.496084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.496112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.507865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.507893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.521189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.521219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.530930] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.530974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.542259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.542289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.552936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.552966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.563804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.563832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.576230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.576258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.017 [2024-07-23 03:18:00.586028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.017 [2024-07-23 03:18:00.586061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.597565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.597594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.608668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.608695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.621253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.621280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.631001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.631040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.642851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.642881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.653940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.653968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.664644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.664672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.674884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.674917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.685885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.685919] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.696890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.696926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.709454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.709482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.718919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.718948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.730488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.730515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.741611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.741646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.752078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.752106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.764792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.764820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.776465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.776492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.785991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.786019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.797007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.797035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.807606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.807642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.818121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.818149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.830194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.830221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.839073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.839115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.276 [2024-07-23 03:18:00.850457] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.276 [2024-07-23 03:18:00.850485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.860995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.861023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.871697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.871724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.882112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.882140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.892944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.892973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.904353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.904381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.914915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.914943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.925347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.925376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.936269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.936298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.947079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.947107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.957896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.957924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.968581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.968608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.979363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.979389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:00.990505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:00.990534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.001211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.001238] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.012296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.012323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.023163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.023191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.034101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.034129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.044725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.044752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.055251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.055279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.065956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.065984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.076396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.076424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.089137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.089165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.098578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.098606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.535 [2024-07-23 03:18:01.109882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.535 [2024-07-23 03:18:01.109926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.120723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.120751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.131720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.131748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.142623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.142660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.153642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.153669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.164203] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.164231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.174630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.174668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.185361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.185389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.195994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.196021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.206826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.206854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.217314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.217342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.227779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.227806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.238668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.238703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.249457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.249485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.260146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.260174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.272335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.272362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.282021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.282048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.292892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.292919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.303467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.303493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.314070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.314106] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.326406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.326434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.336271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.336299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.347828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.347855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.793 [2024-07-23 03:18:01.358369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.793 [2024-07-23 03:18:01.358397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.368973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.369001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.381802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.381830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.391779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.391807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.402482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.402509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.413305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.413332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.425571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.425599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.435519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.435547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.446744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.446787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.457531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.457559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.468073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.468102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.478515] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.478543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.489233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.489261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.501486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.501514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.511330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.511358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.522154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.522187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.532467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.532495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.543195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.543223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.553714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.553741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.564413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.564442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.574914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.574941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.585903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.585930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.596842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.596870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.607311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.607339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.051 [2024-07-23 03:18:01.618232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.051 [2024-07-23 03:18:01.618261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.630663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.630690] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.642120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.642148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.651378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.651406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.662820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.662847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.673038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.673066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.684250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.684278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.694778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.694806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.705420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.705448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.716122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.716150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.726949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.726987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.739960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.739991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.749994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.750022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.761072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.761102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.772055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.772083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.782914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.782942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.793746] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.793776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.804442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.804473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.815701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.815729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.827388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.827419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.839061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.839092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.850400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.850430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.861831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.861859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.872809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.872837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.310 [2024-07-23 03:18:01.883831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.310 [2024-07-23 03:18:01.883859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.897379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.897406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.907767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.907797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.919267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.919297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.930641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.930668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.941412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.941447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.952344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.952372] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.963302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.963329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.974113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.974141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.985281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.985312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:01.996590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:01.996625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.007534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.007561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.018700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.018727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.029523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.029551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.040557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.040585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.051583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.051610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.062458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.062488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.073462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.073493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.084438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.084469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.095081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.095109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.106158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.106185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.117022] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.117049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.127988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.128015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.569 [2024-07-23 03:18:02.140677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.569 [2024-07-23 03:18:02.140705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.150801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.150833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.162650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.162677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.173763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.173790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.186790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.186817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.196498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.196525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.207811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.207840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.218545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.218573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.229706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.229734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.240602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.240653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.251405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.251432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.262239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.262266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.273293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.273324] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.284185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.284212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.294979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.295006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.305666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.305693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.316267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.316294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.326937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.326965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.337807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.337835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.348759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.348786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.359979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.360007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.370843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.370870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.383607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.383646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.828 [2024-07-23 03:18:02.393155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.828 [2024-07-23 03:18:02.393184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.404901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.404936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.416090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.416121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.426996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.427039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.440220] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.440251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.450659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.450688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.461334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.461365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.472346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.472377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.483519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.483550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.086 [2024-07-23 03:18:02.496165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.086 [2024-07-23 03:18:02.496196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.506036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.506063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.517631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.517677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.528599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.528642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.539556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.539587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.552082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.552125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.561826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.561853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.573906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.573934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.584762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.584789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.597454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.597485] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.606915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.606945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.618653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.618687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.631418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.631449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.641298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.641329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.087 [2024-07-23 03:18:02.652980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.087 [2024-07-23 03:18:02.653008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.663578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.663605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.675029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.675060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.685821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.685849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.696865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.696893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.708230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.708260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.719263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.719294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.730180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.730208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.740806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.740833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.751863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.751894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.762897] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.762925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.775442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.775477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.784925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.345 [2024-07-23 03:18:02.784956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.345 [2024-07-23 03:18:02.796651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.796679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.807443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.807474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.818568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.818596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.835998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.836028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.846455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.846485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.857881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.857909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.868970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.868998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.879883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.879911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.890582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.890610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.901148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.901176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.346 [2024-07-23 03:18:02.912169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.346 [2024-07-23 03:18:02.912199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.923753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.923781] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.935554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.935583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.946891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.946919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.958026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.958055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.968979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.969008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.979569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.979596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:02.990198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:02.990234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.001403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.001434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.012472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.012500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.023389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.023420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.033799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.033826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.044503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.044530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.055100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.055130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.065858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.065890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.078453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.078484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.088471] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.088499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.099999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.100042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.111326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.111353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.122764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.604 [2024-07-23 03:18:03.122791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.604 [2024-07-23 03:18:03.133966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.605 [2024-07-23 03:18:03.133993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.605 [2024-07-23 03:18:03.146910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.605 [2024-07-23 03:18:03.146937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.605 [2024-07-23 03:18:03.156423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.605 [2024-07-23 03:18:03.156454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.605 [2024-07-23 03:18:03.167784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.605 [2024-07-23 03:18:03.167812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.605 [2024-07-23 03:18:03.178519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.605 [2024-07-23 03:18:03.178546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.863 [2024-07-23 03:18:03.189666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.863 [2024-07-23 03:18:03.189693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.863 [2024-07-23 03:18:03.200508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.200543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.211631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.211658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.222238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.222265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.232947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.232973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.243873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.243900] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.256536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.256563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.266121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.266148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.277886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.277917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.289168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.289199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.300423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.300450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.311417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.311445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.322347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.322374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.333199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.333230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.344094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.344125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.354499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.354527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.364959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.364989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.375795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.375823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.386828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.386857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.397968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.397997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.408897] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.408932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.419963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.419994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.864 [2024-07-23 03:18:03.431090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.864 [2024-07-23 03:18:03.431117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.441852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.441882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.454587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.454623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.464489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.464520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.475525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.475553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.488223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.488250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.498288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.498319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.509893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.509923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.520774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.520801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.531445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.531487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.542291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.542318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.552789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.552816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.563541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.563571] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.574691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.574719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.585855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.585886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.596437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.596465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.607853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.607881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.618441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.618476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.629027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.629055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.639771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.639799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.647673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.647699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 00:18:37.123 Latency(us) 00:18:37.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.123 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:37.123 Nvme1n1 : 5.01 11775.99 92.00 0.00 0.00 10854.49 4733.16 21651.15 00:18:37.123 =================================================================================================================== 00:18:37.123 Total : 11775.99 92.00 0.00 0.00 10854.49 4733.16 21651.15 00:18:37.123 [2024-07-23 03:18:03.655708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.655733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.663717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.663744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.671777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.671825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.679803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.679851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.687818] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.687867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.123 [2024-07-23 03:18:03.695835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.123 [2024-07-23 03:18:03.695884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.703859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.703906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.711881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.711929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.719912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.719961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.727938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.727986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.735952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.736000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.743971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.744022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.751986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.752035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.760015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.760064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.768046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.768096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.776027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.776056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.784043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.784072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.792107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.792154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.800130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.800179] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.808112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.808146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.816118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.816144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.824193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.824243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.832220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.832265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.840186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.840211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.848207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.848231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 [2024-07-23 03:18:03.856229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.382 [2024-07-23 03:18:03.856253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (434155) - No such process 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 434155 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.382 delay0 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.382 03:18:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:37.382 EAL: No free 2048 kB hugepages reported 
on node 1 00:18:37.640 [2024-07-23 03:18:03.978803] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:44.200 Initializing NVMe Controllers 00:18:44.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.201 Initialization complete. Launching workers. 00:18:44.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 161 00:18:44.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 448, failed to submit 33 00:18:44.201 success 287, unsuccess 161, failed 0 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.201 rmmod nvme_tcp 00:18:44.201 rmmod nvme_fabrics 00:18:44.201 rmmod nvme_keyring 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 432818 ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 432818 ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 432818' 00:18:44.201 killing process with pid 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 432818 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.201 03:18:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.104 03:18:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.104 00:18:46.104 real 0m27.881s 00:18:46.104 user 0m41.456s 00:18:46.104 sys 0m8.412s 00:18:46.104 03:18:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.104 03:18:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.104 ************************************ 00:18:46.104 END TEST nvmf_zcopy 00:18:46.104 ************************************ 00:18:46.104 03:18:12 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:46.104 03:18:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:46.104 03:18:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:46.104 03:18:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.104 ************************************ 00:18:46.104 START TEST nvmf_nmic 00:18:46.104 ************************************ 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:46.104 * Looking for test storage... 00:18:46.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.104 03:18:12 
nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 
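The NVME_HOSTNQN/NVME_HOSTID pair exported above by nvmf/common.sh is what the initiator half of nmic.sh later hands to 'nvme connect'. A minimal sketch of that hand-off, assuming the hostid is simply the UUID portion of the generated NQN (the exact derivation is not shown in this log); the subsystem NQN, address and port are the ones used further below:

    # hedged sketch, not verbatim from the test scripts
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumption: hostid is the UUID part of the NQN
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420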
00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.104 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.363 03:18:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.263 03:18:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.263 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.263 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:48.264 00:18:48.264 --- 10.0.0.2 ping statistics --- 00:18:48.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.264 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:18:48.264 00:18:48.264 --- 10.0.0.1 ping statistics --- 00:18:48.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.264 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=438140 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 438140 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 438140 ']' 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.264 03:18:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.264 [2024-07-23 03:18:14.797865] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
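The nvmf_tcp_init steps above are what let one two-port E810 adapter act as both target and initiator on a single host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace for the target while cvl_0_1 stays in the root namespace for the initiator, and both directions are verified with a ping before the target is launched. Condensed into a sketch (taken from the ip/iptables commands logged above, not from the common.sh source itself):

    # sketch of the namespace plumbing shown in this log
    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 (target side) into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps port 1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    ping -c 1 10.0.0.2                                  # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF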
00:18:48.264 [2024-07-23 03:18:14.797965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.264 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.522 [2024-07-23 03:18:14.864729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.522 [2024-07-23 03:18:14.956433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.523 [2024-07-23 03:18:14.956500] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.523 [2024-07-23 03:18:14.956513] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.523 [2024-07-23 03:18:14.956524] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.523 [2024-07-23 03:18:14.956533] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.523 [2024-07-23 03:18:14.956621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.523 [2024-07-23 03:18:14.956679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.523 [2024-07-23 03:18:14.956744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.523 [2024-07-23 03:18:14.956746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.523 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.523 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:48.523 03:18:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.523 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.523 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 [2024-07-23 03:18:15.104192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 Malloc0 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 [2024-07-23 03:18:15.155294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:48.781 test case1: single bdev can't be used in multiple subsystems 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 [2024-07-23 03:18:15.179194] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:48.781 [2024-07-23 03:18:15.179223] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:48.781 [2024-07-23 03:18:15.179252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.781 request: 00:18:48.781 { 00:18:48.781 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:48.781 "namespace": { 00:18:48.781 "bdev_name": "Malloc0", 00:18:48.781 "no_auto_visible": false 00:18:48.781 }, 00:18:48.781 "method": "nvmf_subsystem_add_ns", 00:18:48.781 "req_id": 1 00:18:48.781 } 00:18:48.781 Got JSON-RPC error response 00:18:48.781 response: 00:18:48.781 { 00:18:48.781 "code": -32602, 00:18:48.781 "message": "Invalid parameters" 00:18:48.781 } 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:48.781 Adding namespace failed - expected result. 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:48.781 test case2: host connect to nvmf target in multiple paths 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.781 [2024-07-23 03:18:15.187299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.781 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:49.346 03:18:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:49.912 03:18:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:49.912 03:18:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:49.912 03:18:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.912 03:18:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:49.912 03:18:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:52.436 03:18:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:52.436 [global] 00:18:52.436 thread=1 00:18:52.436 invalidate=1 00:18:52.436 rw=write 00:18:52.436 time_based=1 00:18:52.436 runtime=1 00:18:52.436 ioengine=libaio 00:18:52.436 direct=1 00:18:52.436 bs=4096 00:18:52.436 iodepth=1 00:18:52.436 norandommap=0 00:18:52.436 numjobs=1 00:18:52.436 00:18:52.436 verify_dump=1 00:18:52.436 verify_backlog=512 00:18:52.436 verify_state_save=0 00:18:52.436 do_verify=1 00:18:52.436 verify=crc32c-intel 00:18:52.436 [job0] 00:18:52.436 filename=/dev/nvme0n1 00:18:52.436 Could not set queue depth (nvme0n1) 00:18:52.436 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.436 fio-3.35 00:18:52.436 Starting 1 thread 00:18:53.399 00:18:53.399 job0: (groupid=0, jobs=1): err= 0: pid=438661: Tue Jul 23 03:18:19 2024 00:18:53.399 read: IOPS=21, BW=87.0KiB/s 
(89.1kB/s)(88.0KiB/1011msec) 00:18:53.399 slat (nsec): min=9553, max=48421, avg=29332.73, stdev=9468.23 00:18:53.399 clat (usec): min=40493, max=41046, avg=40938.37, stdev=108.70 00:18:53.399 lat (usec): min=40503, max=41067, avg=40967.70, stdev=111.86 00:18:53.399 clat percentiles (usec): 00:18:53.399 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:53.399 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:53.399 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:53.399 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:53.399 | 99.99th=[41157] 00:18:53.399 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:18:53.399 slat (nsec): min=7950, max=62633, avg=10260.21, stdev=3931.78 00:18:53.399 clat (usec): min=178, max=353, avg=200.28, stdev=14.80 00:18:53.399 lat (usec): min=187, max=416, avg=210.54, stdev=16.52 00:18:53.399 clat percentiles (usec): 00:18:53.399 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:18:53.399 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:18:53.399 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 221], 00:18:53.399 | 99.00th=[ 245], 99.50th=[ 314], 99.90th=[ 355], 99.95th=[ 355], 00:18:53.399 | 99.99th=[ 355] 00:18:53.399 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.399 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.399 lat (usec) : 250=95.13%, 500=0.75% 00:18:53.399 lat (msec) : 50=4.12% 00:18:53.399 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=2 00:18:53.399 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.399 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.399 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.399 00:18:53.399 Run status group 0 (all jobs): 00:18:53.399 READ: bw=87.0KiB/s (89.1kB/s), 87.0KiB/s-87.0KiB/s (89.1kB/s-89.1kB/s), io=88.0KiB (90.1kB), run=1011-1011msec 00:18:53.399 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:18:53.399 00:18:53.399 Disk stats (read/write): 00:18:53.399 nvme0n1: ios=69/512, merge=0/0, ticks=817/103, in_queue=920, util=92.28% 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.399 03:18:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.399 rmmod nvme_tcp 00:18:53.399 rmmod nvme_fabrics 00:18:53.657 rmmod nvme_keyring 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 438140 ']' 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 438140 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 438140 ']' 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 438140 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 438140 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 438140' 00:18:53.657 killing process with pid 438140 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 438140 00:18:53.657 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 438140 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.917 03:18:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.820 03:18:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:55.820 00:18:55.820 real 0m9.701s 00:18:55.820 user 0m21.778s 00:18:55.821 sys 0m2.252s 00:18:55.821 03:18:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:55.821 03:18:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.821 ************************************ 00:18:55.821 END TEST nvmf_nmic 00:18:55.821 ************************************ 00:18:55.821 03:18:22 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:55.821 03:18:22 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:55.821 03:18:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:55.821 03:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.821 ************************************ 00:18:55.821 START TEST nvmf_fio_target 00:18:55.821 ************************************ 00:18:55.821 03:18:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.080 * Looking for test storage... 00:18:56.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.080 03:18:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.983 03:18:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.983 03:18:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:18:57.983 00:18:57.983 --- 10.0.0.2 ping statistics --- 00:18:57.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.983 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:57.983 00:18:57.983 --- 10.0.0.1 ping statistics --- 00:18:57.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.983 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.983 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=440795 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 440795 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 440795 ']' 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
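The namespace plumbing traced above, from nvmf_tcp_init through launching nvmf_tgt, reduces to roughly the following command sequence. This is a minimal sketch using the interface names and addresses from this run (cvl_0_0, cvl_0_1, 10.0.0.1/2); the real logic in nvmf/common.sh additionally discovers the PCI-attached net devices and handles address flushing, retries and cleanup:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic reach the initiator-side port
ping -c 1 10.0.0.2                                               # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator reachability check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Once the target is listening on /var/tmp/spdk.sock, it is driven entirely over rpc.py, as the nvmf_create_transport, bdev_malloc_create and bdev_raid_create calls below show.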
00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:58.243 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.243 [2024-07-23 03:18:24.611862] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:58.243 [2024-07-23 03:18:24.611962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.243 [2024-07-23 03:18:24.677691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.243 [2024-07-23 03:18:24.769835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.243 [2024-07-23 03:18:24.769888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.243 [2024-07-23 03:18:24.769902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.243 [2024-07-23 03:18:24.769914] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.243 [2024-07-23 03:18:24.769924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.243 [2024-07-23 03:18:24.769989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.243 [2024-07-23 03:18:24.770054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.243 [2024-07-23 03:18:24.770118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.243 [2024-07-23 03:18:24.770121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.501 03:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.759 [2024-07-23 03:18:25.192368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.759 03:18:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.017 03:18:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:59.017 03:18:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.275 03:18:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:59.275 03:18:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.533 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:59.533 03:18:26 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.099 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:00.099 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:00.099 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.357 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:00.357 03:18:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.614 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:00.614 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.872 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:00.872 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:01.129 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.386 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.386 03:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.643 03:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.643 03:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.900 03:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.157 [2024-07-23 03:18:28.612016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.157 03:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:02.414 03:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:02.671 03:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:03.235 03:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:05.763 03:18:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:05.763 [global] 00:19:05.763 thread=1 00:19:05.763 invalidate=1 00:19:05.763 rw=write 00:19:05.763 time_based=1 00:19:05.763 runtime=1 00:19:05.763 ioengine=libaio 00:19:05.763 direct=1 00:19:05.763 bs=4096 00:19:05.763 iodepth=1 00:19:05.763 norandommap=0 00:19:05.763 numjobs=1 00:19:05.763 00:19:05.763 verify_dump=1 00:19:05.763 verify_backlog=512 00:19:05.763 verify_state_save=0 00:19:05.763 do_verify=1 00:19:05.763 verify=crc32c-intel 00:19:05.763 [job0] 00:19:05.763 filename=/dev/nvme0n1 00:19:05.763 [job1] 00:19:05.763 filename=/dev/nvme0n2 00:19:05.763 [job2] 00:19:05.763 filename=/dev/nvme0n3 00:19:05.763 [job3] 00:19:05.763 filename=/dev/nvme0n4 00:19:05.763 Could not set queue depth (nvme0n1) 00:19:05.763 Could not set queue depth (nvme0n2) 00:19:05.763 Could not set queue depth (nvme0n3) 00:19:05.763 Could not set queue depth (nvme0n4) 00:19:05.763 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.763 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.763 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.763 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.763 fio-3.35 00:19:05.763 Starting 4 threads 00:19:06.699 00:19:06.699 job0: (groupid=0, jobs=1): err= 0: pid=441804: Tue Jul 23 03:18:33 2024 00:19:06.699 read: IOPS=522, BW=2092KiB/s (2142kB/s)(2100KiB/1004msec) 00:19:06.699 slat (nsec): min=6986, max=55683, avg=10207.33, stdev=5080.75 00:19:06.699 clat (usec): min=292, max=41010, avg=1351.29, stdev=6319.46 00:19:06.699 lat (usec): min=301, max=41024, avg=1361.50, stdev=6320.02 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 318], 00:19:06.699 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 326], 60.00th=[ 330], 00:19:06.699 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 478], 95.00th=[ 529], 00:19:06.699 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:06.699 | 99.99th=[41157] 00:19:06.699 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:19:06.699 slat (usec): min=8, max=17329, avg=33.95, stdev=541.06 00:19:06.699 clat (usec): min=192, max=395, avg=242.45, 
stdev=27.16 00:19:06.699 lat (usec): min=202, max=17617, avg=276.40, stdev=543.28 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:19:06.699 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:19:06.699 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 302], 00:19:06.699 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 396], 00:19:06.699 | 99.99th=[ 396] 00:19:06.699 bw ( KiB/s): min= 8192, max= 8192, per=45.77%, avg=8192.00, stdev= 0.00, samples=1 00:19:06.699 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:06.699 lat (usec) : 250=48.29%, 500=48.68%, 750=2.19% 00:19:06.699 lat (msec) : 50=0.84% 00:19:06.699 cpu : usr=2.09%, sys=2.49%, ctx=1551, majf=0, minf=1 00:19:06.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.699 job1: (groupid=0, jobs=1): err= 0: pid=441805: Tue Jul 23 03:18:33 2024 00:19:06.699 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:06.699 slat (nsec): min=7398, max=56149, avg=16381.21, stdev=9434.26 00:19:06.699 clat (usec): min=426, max=41486, avg=569.23, stdev=1800.10 00:19:06.699 lat (usec): min=436, max=41504, avg=585.61, stdev=1800.87 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 457], 20.00th=[ 461], 00:19:06.699 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 482], 00:19:06.699 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 553], 00:19:06.699 | 99.00th=[ 619], 99.50th=[ 1188], 99.90th=[41157], 99.95th=[41681], 00:19:06.699 | 99.99th=[41681] 00:19:06.699 write: IOPS=1526, BW=6106KiB/s (6252kB/s)(6112KiB/1001msec); 0 zone resets 00:19:06.699 slat (nsec): min=6223, max=72205, avg=13463.10, stdev=6786.20 00:19:06.699 clat (usec): min=205, max=1010, avg=241.69, stdev=38.74 00:19:06.699 lat (usec): min=213, max=1028, avg=255.16, stdev=41.93 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:19:06.699 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 235], 00:19:06.699 | 70.00th=[ 241], 80.00th=[ 258], 90.00th=[ 289], 95.00th=[ 318], 00:19:06.699 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 408], 99.95th=[ 1012], 00:19:06.699 | 99.99th=[ 1012] 00:19:06.699 bw ( KiB/s): min= 4096, max= 4096, per=22.88%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.699 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.699 lat (usec) : 250=46.55%, 500=43.53%, 750=9.60%, 1000=0.04% 00:19:06.699 lat (msec) : 2=0.20%, 50=0.08% 00:19:06.699 cpu : usr=1.70%, sys=4.20%, ctx=2552, majf=0, minf=1 00:19:06.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 issued rwts: total=1024,1528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.699 job2: (groupid=0, jobs=1): err= 0: pid=441806: Tue Jul 23 03:18:33 2024 00:19:06.699 read: IOPS=1061, BW=4248KiB/s 
(4350kB/s)(4252KiB/1001msec) 00:19:06.699 slat (nsec): min=5578, max=47238, avg=10779.96, stdev=6815.47 00:19:06.699 clat (usec): min=434, max=695, avg=490.21, stdev=27.88 00:19:06.699 lat (usec): min=443, max=716, avg=500.99, stdev=31.27 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 445], 5.00th=[ 457], 10.00th=[ 461], 20.00th=[ 469], 00:19:06.699 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 494], 00:19:06.699 | 70.00th=[ 498], 80.00th=[ 510], 90.00th=[ 529], 95.00th=[ 537], 00:19:06.699 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 668], 99.95th=[ 693], 00:19:06.699 | 99.99th=[ 693] 00:19:06.699 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:06.699 slat (nsec): min=6774, max=72237, avg=14571.32, stdev=9971.92 00:19:06.699 clat (usec): min=225, max=690, avg=283.82, stdev=44.31 00:19:06.699 lat (usec): min=232, max=712, avg=298.39, stdev=50.13 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:19:06.699 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281], 00:19:06.699 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 375], 00:19:06.699 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 570], 99.95th=[ 693], 00:19:06.699 | 99.99th=[ 693] 00:19:06.699 bw ( KiB/s): min= 5560, max= 5560, per=31.06%, avg=5560.00, stdev= 0.00, samples=1 00:19:06.699 iops : min= 1390, max= 1390, avg=1390.00, stdev= 0.00, samples=1 00:19:06.699 lat (usec) : 250=11.58%, 500=76.49%, 750=11.93% 00:19:06.699 cpu : usr=3.20%, sys=3.90%, ctx=2599, majf=0, minf=1 00:19:06.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.699 issued rwts: total=1063,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.699 job3: (groupid=0, jobs=1): err= 0: pid=441807: Tue Jul 23 03:18:33 2024 00:19:06.699 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:19:06.699 slat (nsec): min=8278, max=34241, avg=15796.24, stdev=6576.04 00:19:06.699 clat (usec): min=40799, max=41117, avg=40984.34, stdev=81.15 00:19:06.699 lat (usec): min=40833, max=41132, avg=41000.14, stdev=77.44 00:19:06.699 clat percentiles (usec): 00:19:06.699 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:06.699 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.700 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:06.700 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:06.700 | 99.99th=[41157] 00:19:06.700 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:06.700 slat (nsec): min=8478, max=70324, avg=17327.42, stdev=8741.99 00:19:06.700 clat (usec): min=217, max=478, avg=304.40, stdev=44.32 00:19:06.700 lat (usec): min=227, max=503, avg=321.73, stdev=45.72 00:19:06.700 clat percentiles (usec): 00:19:06.700 | 1.00th=[ 231], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:19:06.700 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:19:06.700 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 392], 00:19:06.700 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 478], 99.95th=[ 478], 00:19:06.700 | 99.99th=[ 478] 00:19:06.700 bw ( KiB/s): min= 4096, max= 4096, per=22.88%, avg=4096.00, stdev= 0.00, 
samples=1 00:19:06.700 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.700 lat (usec) : 250=5.44%, 500=90.62% 00:19:06.700 lat (msec) : 50=3.94% 00:19:06.700 cpu : usr=0.68%, sys=0.97%, ctx=533, majf=0, minf=2 00:19:06.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.700 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.700 00:19:06.700 Run status group 0 (all jobs): 00:19:06.700 READ: bw=10.0MiB/s (10.5MB/s), 81.7KiB/s-4248KiB/s (83.7kB/s-4350kB/s), io=10.3MiB (10.8MB), run=1001-1028msec 00:19:06.700 WRITE: bw=17.5MiB/s (18.3MB/s), 1992KiB/s-6138KiB/s (2040kB/s-6285kB/s), io=18.0MiB (18.8MB), run=1001-1028msec 00:19:06.700 00:19:06.700 Disk stats (read/write): 00:19:06.700 nvme0n1: ios=546/1024, merge=0/0, ticks=1537/239, in_queue=1776, util=97.60% 00:19:06.700 nvme0n2: ios=980/1024, merge=0/0, ticks=572/244, in_queue=816, util=86.86% 00:19:06.700 nvme0n3: ios=1024/1037, merge=0/0, ticks=489/290, in_queue=779, util=88.90% 00:19:06.700 nvme0n4: ios=16/512, merge=0/0, ticks=656/152, in_queue=808, util=89.65% 00:19:06.700 03:18:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:06.700 [global] 00:19:06.700 thread=1 00:19:06.700 invalidate=1 00:19:06.700 rw=randwrite 00:19:06.700 time_based=1 00:19:06.700 runtime=1 00:19:06.700 ioengine=libaio 00:19:06.700 direct=1 00:19:06.700 bs=4096 00:19:06.700 iodepth=1 00:19:06.700 norandommap=0 00:19:06.700 numjobs=1 00:19:06.700 00:19:06.700 verify_dump=1 00:19:06.700 verify_backlog=512 00:19:06.700 verify_state_save=0 00:19:06.700 do_verify=1 00:19:06.700 verify=crc32c-intel 00:19:06.700 [job0] 00:19:06.700 filename=/dev/nvme0n1 00:19:06.700 [job1] 00:19:06.700 filename=/dev/nvme0n2 00:19:06.700 [job2] 00:19:06.700 filename=/dev/nvme0n3 00:19:06.700 [job3] 00:19:06.700 filename=/dev/nvme0n4 00:19:06.700 Could not set queue depth (nvme0n1) 00:19:06.700 Could not set queue depth (nvme0n2) 00:19:06.700 Could not set queue depth (nvme0n3) 00:19:06.700 Could not set queue depth (nvme0n4) 00:19:06.956 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.956 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.956 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.956 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.956 fio-3.35 00:19:06.956 Starting 4 threads 00:19:08.327 00:19:08.327 job0: (groupid=0, jobs=1): err= 0: pid=442037: Tue Jul 23 03:18:34 2024 00:19:08.327 read: IOPS=22, BW=89.6KiB/s (91.7kB/s)(92.0KiB/1027msec) 00:19:08.327 slat (nsec): min=13525, max=35390, avg=26846.65, stdev=9119.23 00:19:08.327 clat (usec): min=486, max=43970, avg=37599.37, stdev=11718.32 00:19:08.327 lat (usec): min=521, max=43989, avg=37626.21, stdev=11718.35 00:19:08.327 clat percentiles (usec): 00:19:08.327 | 1.00th=[ 486], 5.00th=[ 553], 10.00th=[40633], 20.00th=[41157], 00:19:08.327 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:08.327 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:19:08.327 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:19:08.327 | 99.99th=[43779] 00:19:08.327 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:19:08.327 slat (nsec): min=8953, max=61671, avg=19669.55, stdev=10076.41 00:19:08.327 clat (usec): min=202, max=798, avg=290.09, stdev=58.16 00:19:08.327 lat (usec): min=212, max=839, avg=309.75, stdev=60.76 00:19:08.327 clat percentiles (usec): 00:19:08.327 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 245], 00:19:08.327 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:19:08.327 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 400], 00:19:08.327 | 99.00th=[ 469], 99.50th=[ 506], 99.90th=[ 799], 99.95th=[ 799], 00:19:08.327 | 99.99th=[ 799] 00:19:08.327 bw ( KiB/s): min= 4096, max= 4096, per=31.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.327 lat (usec) : 250=23.18%, 500=72.15%, 750=0.56%, 1000=0.19% 00:19:08.327 lat (msec) : 50=3.93% 00:19:08.327 cpu : usr=0.97%, sys=0.97%, ctx=536, majf=0, minf=2 00:19:08.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.327 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.327 job1: (groupid=0, jobs=1): err= 0: pid=442038: Tue Jul 23 03:18:34 2024 00:19:08.327 read: IOPS=514, BW=2057KiB/s (2106kB/s)(2096KiB/1019msec) 00:19:08.327 slat (nsec): min=11909, max=54079, avg=19718.23, stdev=5871.16 00:19:08.327 clat (usec): min=326, max=41180, avg=1305.18, stdev=6081.33 00:19:08.327 lat (usec): min=343, max=41203, avg=1324.90, stdev=6081.52 00:19:08.327 clat percentiles (usec): 00:19:08.327 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:19:08.327 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 351], 60.00th=[ 359], 00:19:08.327 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 482], 95.00th=[ 562], 00:19:08.327 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:08.327 | 99.99th=[41157] 00:19:08.327 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:19:08.327 slat (nsec): min=9436, max=67033, avg=21595.68, stdev=8493.78 00:19:08.327 clat (usec): min=217, max=807, avg=286.30, stdev=36.53 00:19:08.327 lat (usec): min=230, max=818, avg=307.90, stdev=38.34 00:19:08.327 clat percentiles (usec): 00:19:08.328 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 262], 00:19:08.328 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:19:08.328 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 359], 00:19:08.328 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 461], 99.95th=[ 807], 00:19:08.328 | 99.99th=[ 807] 00:19:08.328 bw ( KiB/s): min= 680, max= 7512, per=31.98%, avg=4096.00, stdev=4830.95, samples=2 00:19:08.328 iops : min= 170, max= 1878, avg=1024.00, stdev=1207.74, samples=2 00:19:08.328 lat (usec) : 250=5.49%, 500=91.67%, 750=2.00%, 1000=0.06% 00:19:08.328 lat (msec) : 50=0.78% 00:19:08.328 cpu : usr=1.67%, sys=4.91%, ctx=1549, majf=0, minf=1 00:19:08.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.328 job2: (groupid=0, jobs=1): err= 0: pid=442041: Tue Jul 23 03:18:34 2024 00:19:08.328 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:19:08.328 slat (nsec): min=9520, max=14762, avg=14178.81, stdev=1092.00 00:19:08.328 clat (usec): min=40955, max=41117, avg=40992.89, stdev=36.43 00:19:08.328 lat (usec): min=40970, max=41127, avg=41007.07, stdev=35.58 00:19:08.328 clat percentiles (usec): 00:19:08.328 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:08.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:08.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:08.328 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:08.328 | 99.99th=[41157] 00:19:08.328 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:19:08.328 slat (nsec): min=8783, max=43433, avg=10528.49, stdev=2579.52 00:19:08.328 clat (usec): min=253, max=975, avg=284.44, stdev=40.81 00:19:08.328 lat (usec): min=262, max=985, avg=294.96, stdev=41.12 00:19:08.328 clat percentiles (usec): 00:19:08.328 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:19:08.328 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:19:08.328 | 70.00th=[ 289], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:19:08.328 | 99.00th=[ 383], 99.50th=[ 537], 99.90th=[ 979], 99.95th=[ 979], 00:19:08.328 | 99.99th=[ 979] 00:19:08.328 bw ( KiB/s): min= 4087, max= 4087, per=31.91%, avg=4087.00, stdev= 0.00, samples=1 00:19:08.328 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:08.328 lat (usec) : 500=95.50%, 750=0.38%, 1000=0.19% 00:19:08.328 lat (msec) : 50=3.94% 00:19:08.328 cpu : usr=0.30%, sys=0.69%, ctx=534, majf=0, minf=1 00:19:08.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.328 job3: (groupid=0, jobs=1): err= 0: pid=442045: Tue Jul 23 03:18:34 2024 00:19:08.328 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:08.328 slat (nsec): min=5605, max=36158, avg=12242.52, stdev=5633.54 00:19:08.328 clat (usec): min=357, max=40989, avg=627.60, stdev=2855.75 00:19:08.328 lat (usec): min=363, max=41012, avg=639.84, stdev=2857.27 00:19:08.328 clat percentiles (usec): 00:19:08.328 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:19:08.328 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 416], 00:19:08.328 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 490], 00:19:08.328 | 99.00th=[ 562], 99.50th=[13304], 99.90th=[41157], 99.95th=[41157], 00:19:08.328 | 99.99th=[41157] 00:19:08.328 write: IOPS=1238, BW=4955KiB/s (5074kB/s)(4960KiB/1001msec); 0 zone resets 00:19:08.328 slat (nsec): min=7171, max=55990, avg=16936.57, stdev=9484.58 00:19:08.328 clat (usec): min=189, max=1083, avg=252.77, stdev=55.83 00:19:08.328 lat (usec): min=196, max=1090, avg=269.71, stdev=59.50 00:19:08.328 clat percentiles (usec): 
00:19:08.328 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:19:08.328 | 30.00th=[ 221], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 253], 00:19:08.328 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 343], 00:19:08.328 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 783], 99.95th=[ 1090], 00:19:08.328 | 99.99th=[ 1090] 00:19:08.328 bw ( KiB/s): min= 4096, max= 4096, per=31.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.328 lat (usec) : 250=31.71%, 500=66.83%, 750=1.06%, 1000=0.04% 00:19:08.328 lat (msec) : 2=0.09%, 20=0.04%, 50=0.22% 00:19:08.328 cpu : usr=2.40%, sys=4.60%, ctx=2265, majf=0, minf=1 00:19:08.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.328 issued rwts: total=1024,1240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.328 00:19:08.328 Run status group 0 (all jobs): 00:19:08.328 READ: bw=6201KiB/s (6349kB/s), 82.8KiB/s-4092KiB/s (84.8kB/s-4190kB/s), io=6368KiB (6521kB), run=1001-1027msec 00:19:08.328 WRITE: bw=12.5MiB/s (13.1MB/s), 1994KiB/s-4955KiB/s (2042kB/s-5074kB/s), io=12.8MiB (13.5MB), run=1001-1027msec 00:19:08.328 00:19:08.328 Disk stats (read/write): 00:19:08.328 nvme0n1: ios=68/512, merge=0/0, ticks=810/146, in_queue=956, util=89.78% 00:19:08.328 nvme0n2: ios=543/1024, merge=0/0, ticks=1454/277, in_queue=1731, util=97.26% 00:19:08.328 nvme0n3: ios=44/512, merge=0/0, ticks=1644/141, in_queue=1785, util=93.74% 00:19:08.328 nvme0n4: ios=802/1024, merge=0/0, ticks=1484/256, in_queue=1740, util=97.90% 00:19:08.328 03:18:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:08.328 [global] 00:19:08.328 thread=1 00:19:08.328 invalidate=1 00:19:08.328 rw=write 00:19:08.328 time_based=1 00:19:08.328 runtime=1 00:19:08.328 ioengine=libaio 00:19:08.328 direct=1 00:19:08.328 bs=4096 00:19:08.328 iodepth=128 00:19:08.328 norandommap=0 00:19:08.328 numjobs=1 00:19:08.328 00:19:08.328 verify_dump=1 00:19:08.328 verify_backlog=512 00:19:08.328 verify_state_save=0 00:19:08.328 do_verify=1 00:19:08.328 verify=crc32c-intel 00:19:08.328 [job0] 00:19:08.328 filename=/dev/nvme0n1 00:19:08.328 [job1] 00:19:08.328 filename=/dev/nvme0n2 00:19:08.328 [job2] 00:19:08.328 filename=/dev/nvme0n3 00:19:08.328 [job3] 00:19:08.328 filename=/dev/nvme0n4 00:19:08.328 Could not set queue depth (nvme0n1) 00:19:08.328 Could not set queue depth (nvme0n2) 00:19:08.328 Could not set queue depth (nvme0n3) 00:19:08.328 Could not set queue depth (nvme0n4) 00:19:08.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.328 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.328 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.328 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.328 fio-3.35 00:19:08.328 Starting 4 threads 00:19:09.703 00:19:09.703 job0: (groupid=0, jobs=1): err= 0: pid=442383: Tue Jul 23 03:18:36 2024 00:19:09.703 read: IOPS=5554, BW=21.7MiB/s 
(22.8MB/s)(21.8MiB/1005msec) 00:19:09.703 slat (usec): min=2, max=17487, avg=92.25, stdev=724.62 00:19:09.703 clat (usec): min=808, max=45818, avg=11784.35, stdev=4126.97 00:19:09.703 lat (usec): min=3475, max=45823, avg=11876.60, stdev=4182.52 00:19:09.703 clat percentiles (usec): 00:19:09.703 | 1.00th=[ 5145], 5.00th=[ 7308], 10.00th=[ 8979], 20.00th=[ 9765], 00:19:09.703 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:19:09.703 | 70.00th=[11863], 80.00th=[12518], 90.00th=[15926], 95.00th=[18220], 00:19:09.703 | 99.00th=[28443], 99.50th=[34866], 99.90th=[45876], 99.95th=[45876], 00:19:09.703 | 99.99th=[45876] 00:19:09.703 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:19:09.703 slat (usec): min=3, max=17176, avg=76.14, stdev=600.79 00:19:09.703 clat (usec): min=389, max=40782, avg=10876.88, stdev=5082.45 00:19:09.703 lat (usec): min=713, max=40794, avg=10953.02, stdev=5122.11 00:19:09.703 clat percentiles (usec): 00:19:09.703 | 1.00th=[ 3064], 5.00th=[ 4948], 10.00th=[ 5932], 20.00th=[ 7177], 00:19:09.703 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11076], 00:19:09.703 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14615], 95.00th=[21890], 00:19:09.703 | 99.00th=[29754], 99.50th=[35390], 99.90th=[35390], 99.95th=[36439], 00:19:09.703 | 99.99th=[40633] 00:19:09.703 bw ( KiB/s): min=20480, max=24576, per=34.04%, avg=22528.00, stdev=2896.31, samples=2 00:19:09.703 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:19:09.703 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:19:09.703 lat (msec) : 2=0.28%, 4=1.31%, 10=31.70%, 20=61.99%, 50=4.68% 00:19:09.703 cpu : usr=4.58%, sys=6.57%, ctx=427, majf=0, minf=15 00:19:09.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:09.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.703 issued rwts: total=5582,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.703 job1: (groupid=0, jobs=1): err= 0: pid=442384: Tue Jul 23 03:18:36 2024 00:19:09.703 read: IOPS=4339, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1010msec) 00:19:09.703 slat (usec): min=3, max=24760, avg=88.87, stdev=764.05 00:19:09.703 clat (usec): min=2476, max=87338, avg=12589.60, stdev=6937.51 00:19:09.703 lat (usec): min=3644, max=87352, avg=12678.47, stdev=6961.45 00:19:09.703 clat percentiles (usec): 00:19:09.703 | 1.00th=[ 5342], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[ 9634], 00:19:09.703 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:19:09.703 | 70.00th=[12518], 80.00th=[14484], 90.00th=[17433], 95.00th=[23462], 00:19:09.703 | 99.00th=[33424], 99.50th=[33817], 99.90th=[87557], 99.95th=[87557], 00:19:09.703 | 99.99th=[87557] 00:19:09.703 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:19:09.703 slat (usec): min=4, max=30970, avg=93.69, stdev=838.98 00:19:09.703 clat (usec): min=2381, max=80165, avg=13329.75, stdev=9676.57 00:19:09.703 lat (usec): min=2389, max=80182, avg=13423.44, stdev=9753.98 00:19:09.703 clat percentiles (usec): 00:19:09.703 | 1.00th=[ 3687], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 6783], 00:19:09.703 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[11469], 00:19:09.703 | 70.00th=[11863], 80.00th=[14746], 90.00th=[29754], 95.00th=[34866], 00:19:09.703 | 99.00th=[40109], 99.50th=[68682], 
99.90th=[76022], 99.95th=[80217], 00:19:09.703 | 99.99th=[80217] 00:19:09.703 bw ( KiB/s): min=20480, max=20480, per=30.95%, avg=20480.00, stdev= 0.00, samples=2 00:19:09.703 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:09.703 lat (msec) : 4=1.17%, 10=34.11%, 20=53.45%, 50=10.59%, 100=0.69% 00:19:09.703 cpu : usr=7.23%, sys=10.11%, ctx=454, majf=0, minf=23 00:19:09.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:09.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.704 issued rwts: total=4383,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.704 job2: (groupid=0, jobs=1): err= 0: pid=442385: Tue Jul 23 03:18:36 2024 00:19:09.704 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:19:09.704 slat (usec): min=2, max=17359, avg=168.62, stdev=1083.23 00:19:09.704 clat (usec): min=12950, max=43499, avg=22524.51, stdev=6871.51 00:19:09.704 lat (usec): min=12964, max=43603, avg=22693.13, stdev=6939.72 00:19:09.704 clat percentiles (usec): 00:19:09.704 | 1.00th=[13042], 5.00th=[13698], 10.00th=[15533], 20.00th=[17171], 00:19:09.704 | 30.00th=[17695], 40.00th=[19268], 50.00th=[20579], 60.00th=[22152], 00:19:09.704 | 70.00th=[24773], 80.00th=[27919], 90.00th=[34341], 95.00th=[37487], 00:19:09.704 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:09.704 | 99.99th=[43254] 00:19:09.704 write: IOPS=2905, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1005msec); 0 zone resets 00:19:09.704 slat (usec): min=3, max=20723, avg=185.35, stdev=1156.80 00:19:09.704 clat (usec): min=1639, max=64306, avg=23925.60, stdev=14655.83 00:19:09.704 lat (usec): min=1648, max=64348, avg=24110.94, stdev=14775.88 00:19:09.704 clat percentiles (usec): 00:19:09.704 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[11863], 20.00th=[13960], 00:19:09.704 | 30.00th=[14484], 40.00th=[16188], 50.00th=[18744], 60.00th=[20317], 00:19:09.704 | 70.00th=[24511], 80.00th=[35914], 90.00th=[50594], 95.00th=[59507], 00:19:09.704 | 99.00th=[62653], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:19:09.704 | 99.99th=[64226] 00:19:09.704 bw ( KiB/s): min=10072, max=12272, per=16.88%, avg=11172.00, stdev=1555.63, samples=2 00:19:09.704 iops : min= 2518, max= 3068, avg=2793.00, stdev=388.91, samples=2 00:19:09.704 lat (msec) : 2=0.16%, 4=0.02%, 10=3.69%, 20=47.66%, 50=42.99% 00:19:09.704 lat (msec) : 100=5.47% 00:19:09.704 cpu : usr=3.49%, sys=5.78%, ctx=194, majf=0, minf=15 00:19:09.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:09.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.704 issued rwts: total=2560,2920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.704 job3: (groupid=0, jobs=1): err= 0: pid=442386: Tue Jul 23 03:18:36 2024 00:19:09.704 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:19:09.704 slat (usec): min=2, max=16466, avg=149.08, stdev=909.95 00:19:09.704 clat (usec): min=7432, max=46315, avg=19545.89, stdev=7657.84 00:19:09.704 lat (usec): min=7449, max=46324, avg=19694.98, stdev=7728.64 00:19:09.704 clat percentiles (usec): 00:19:09.704 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[11863], 20.00th=[13173], 00:19:09.704 | 
30.00th=[15270], 40.00th=[15664], 50.00th=[16712], 60.00th=[18744], 00:19:09.704 | 70.00th=[21627], 80.00th=[26608], 90.00th=[31065], 95.00th=[33817], 00:19:09.704 | 99.00th=[40633], 99.50th=[40633], 99.90th=[45351], 99.95th=[46400], 00:19:09.704 | 99.99th=[46400] 00:19:09.704 write: IOPS=3022, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1005msec); 0 zone resets 00:19:09.704 slat (usec): min=3, max=37410, avg=192.69, stdev=1320.80 00:19:09.704 clat (usec): min=988, max=122644, avg=25502.54, stdev=19538.50 00:19:09.704 lat (usec): min=1030, max=122654, avg=25695.24, stdev=19624.67 00:19:09.704 clat percentiles (msec): 00:19:09.704 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 14], 00:19:09.704 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 25], 00:19:09.704 | 70.00th=[ 28], 80.00th=[ 36], 90.00th=[ 51], 95.00th=[ 63], 00:19:09.704 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:19:09.704 | 99.99th=[ 123] 00:19:09.704 bw ( KiB/s): min=10552, max=12736, per=17.59%, avg=11644.00, stdev=1544.32, samples=2 00:19:09.704 iops : min= 2638, max= 3184, avg=2911.00, stdev=386.08, samples=2 00:19:09.704 lat (usec) : 1000=0.02% 00:19:09.704 lat (msec) : 2=0.21%, 4=0.68%, 10=6.95%, 20=50.93%, 50=35.62% 00:19:09.704 lat (msec) : 100=4.47%, 250=1.13% 00:19:09.704 cpu : usr=4.68%, sys=4.78%, ctx=288, majf=0, minf=11 00:19:09.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:09.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.704 issued rwts: total=2560,3038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.704 00:19:09.704 Run status group 0 (all jobs): 00:19:09.704 READ: bw=58.3MiB/s (61.2MB/s), 9.95MiB/s-21.7MiB/s (10.4MB/s-22.8MB/s), io=58.9MiB (61.8MB), run=1005-1010msec 00:19:09.704 WRITE: bw=64.6MiB/s (67.8MB/s), 11.3MiB/s-21.9MiB/s (11.9MB/s-23.0MB/s), io=65.3MiB (68.4MB), run=1005-1010msec 00:19:09.704 00:19:09.704 Disk stats (read/write): 00:19:09.704 nvme0n1: ios=4788/5120, merge=0/0, ticks=49906/44621, in_queue=94527, util=91.28% 00:19:09.704 nvme0n2: ios=3602/4607, merge=0/0, ticks=39969/51680, in_queue=91649, util=97.97% 00:19:09.704 nvme0n3: ios=2326/2560, merge=0/0, ticks=29769/34592, in_queue=64361, util=98.01% 00:19:09.704 nvme0n4: ios=2048/2309, merge=0/0, ticks=20313/34215, in_queue=54528, util=86.08% 00:19:09.704 03:18:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:09.704 [global] 00:19:09.704 thread=1 00:19:09.704 invalidate=1 00:19:09.704 rw=randwrite 00:19:09.704 time_based=1 00:19:09.704 runtime=1 00:19:09.704 ioengine=libaio 00:19:09.704 direct=1 00:19:09.704 bs=4096 00:19:09.704 iodepth=128 00:19:09.704 norandommap=0 00:19:09.704 numjobs=1 00:19:09.704 00:19:09.704 verify_dump=1 00:19:09.704 verify_backlog=512 00:19:09.704 verify_state_save=0 00:19:09.704 do_verify=1 00:19:09.704 verify=crc32c-intel 00:19:09.704 [job0] 00:19:09.704 filename=/dev/nvme0n1 00:19:09.704 [job1] 00:19:09.704 filename=/dev/nvme0n2 00:19:09.704 [job2] 00:19:09.704 filename=/dev/nvme0n3 00:19:09.704 [job3] 00:19:09.704 filename=/dev/nvme0n4 00:19:09.704 Could not set queue depth (nvme0n1) 00:19:09.704 Could not set queue depth (nvme0n2) 00:19:09.704 Could not set queue depth (nvme0n3) 00:19:09.704 Could not set queue depth (nvme0n4) 
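The [global]/[job*] block fio echoes just above is the configuration that scripts/fio-wrapper feeds it for this pass (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v). Written out as a standalone job file it would look roughly like the sketch below; the file name and the here-doc are illustrative, and only the parameter values are taken from the dump in this run:

cat > nvmf_randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme0n2

[job2]
filename=/dev/nvme0n3

[job3]
filename=/dev/nvme0n4
EOF
fio nvmf_randwrite.fio

Each of the four jobs targets one namespace of the connected controller (nvme0n1 through nvme0n4), matching the Malloc0, Malloc1, raid0 and concat0 namespaces attached to nqn.2016-06.io.spdk:cnode1 earlier in the trace.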
00:19:09.962 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.962 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.962 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.962 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.962 fio-3.35 00:19:09.962 Starting 4 threads 00:19:11.382 00:19:11.382 job0: (groupid=0, jobs=1): err= 0: pid=442617: Tue Jul 23 03:18:37 2024 00:19:11.382 read: IOPS=4480, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1007msec) 00:19:11.382 slat (usec): min=3, max=7732, avg=110.17, stdev=616.28 00:19:11.382 clat (usec): min=2114, max=26272, avg=14746.53, stdev=2486.94 00:19:11.382 lat (usec): min=6816, max=26288, avg=14856.70, stdev=2522.87 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:19:11.382 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:19:11.382 | 70.00th=[15401], 80.00th=[16319], 90.00th=[17957], 95.00th=[19530], 00:19:11.382 | 99.00th=[23200], 99.50th=[25035], 99.90th=[26084], 99.95th=[26346], 00:19:11.382 | 99.99th=[26346] 00:19:11.382 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:19:11.382 slat (usec): min=4, max=6853, avg=97.13, stdev=570.96 00:19:11.382 clat (usec): min=6789, max=24957, avg=13199.14, stdev=2812.86 00:19:11.382 lat (usec): min=6809, max=24976, avg=13296.27, stdev=2860.83 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:19:11.382 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:19:11.382 | 70.00th=[13304], 80.00th=[14353], 90.00th=[18744], 95.00th=[20579], 00:19:11.382 | 99.00th=[20841], 99.50th=[21627], 99.90th=[24773], 99.95th=[25035], 00:19:11.382 | 99.99th=[25035] 00:19:11.382 bw ( KiB/s): min=17256, max=19608, per=33.63%, avg=18432.00, stdev=1663.12, samples=2 00:19:11.382 iops : min= 4314, max= 4902, avg=4608.00, stdev=415.78, samples=2 00:19:11.382 lat (msec) : 4=0.01%, 10=2.32%, 20=91.58%, 50=6.09% 00:19:11.382 cpu : usr=7.55%, sys=11.43%, ctx=295, majf=0, minf=1 00:19:11.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:11.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.382 issued rwts: total=4512,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.382 job1: (groupid=0, jobs=1): err= 0: pid=442618: Tue Jul 23 03:18:37 2024 00:19:11.382 read: IOPS=3692, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:19:11.382 slat (usec): min=2, max=15197, avg=113.41, stdev=770.63 00:19:11.382 clat (usec): min=1476, max=58359, avg=14927.38, stdev=6879.01 00:19:11.382 lat (usec): min=4976, max=58375, avg=15040.79, stdev=6950.31 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[ 7635], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10028], 00:19:11.382 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12911], 60.00th=[14615], 00:19:11.382 | 70.00th=[17171], 80.00th=[18482], 90.00th=[22676], 95.00th=[27132], 00:19:11.382 | 99.00th=[41681], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:19:11.382 | 99.99th=[58459] 00:19:11.382 write: 
IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:11.382 slat (usec): min=4, max=14136, avg=119.04, stdev=725.93 00:19:11.382 clat (usec): min=333, max=80066, avg=17599.08, stdev=13295.28 00:19:11.382 lat (usec): min=351, max=80076, avg=17718.12, stdev=13373.37 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[ 4883], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8160], 00:19:11.382 | 30.00th=[ 8848], 40.00th=[10683], 50.00th=[12780], 60.00th=[18220], 00:19:11.382 | 70.00th=[19792], 80.00th=[23200], 90.00th=[34341], 95.00th=[46400], 00:19:11.382 | 99.00th=[77071], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:19:11.382 | 99.99th=[80217] 00:19:11.382 bw ( KiB/s): min=12288, max=20472, per=29.89%, avg=16380.00, stdev=5786.96, samples=2 00:19:11.382 iops : min= 3072, max= 5118, avg=4095.00, stdev=1446.74, samples=2 00:19:11.382 lat (usec) : 500=0.17%, 750=0.09% 00:19:11.382 lat (msec) : 2=0.05%, 4=0.01%, 10=26.77%, 20=52.88%, 50=17.92% 00:19:11.382 lat (msec) : 100=2.11% 00:19:11.382 cpu : usr=7.57%, sys=8.47%, ctx=319, majf=0, minf=1 00:19:11.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:11.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.382 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.382 job2: (groupid=0, jobs=1): err= 0: pid=442619: Tue Jul 23 03:18:37 2024 00:19:11.382 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:19:11.382 slat (usec): min=4, max=14288, avg=268.44, stdev=1388.22 00:19:11.382 clat (usec): min=15809, max=53568, avg=33619.98, stdev=10581.19 00:19:11.382 lat (usec): min=16544, max=56896, avg=33888.42, stdev=10597.35 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[17433], 5.00th=[19792], 10.00th=[20579], 20.00th=[20579], 00:19:11.382 | 30.00th=[25560], 40.00th=[30802], 50.00th=[34341], 60.00th=[37487], 00:19:11.382 | 70.00th=[41157], 80.00th=[44303], 90.00th=[46924], 95.00th=[49546], 00:19:11.382 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:19:11.382 | 99.99th=[53740] 00:19:11.382 write: IOPS=1858, BW=7436KiB/s (7614kB/s)(7488KiB/1007msec); 0 zone resets 00:19:11.382 slat (usec): min=4, max=6280, avg=299.98, stdev=988.19 00:19:11.382 clat (usec): min=5868, max=58692, avg=39353.16, stdev=8635.58 00:19:11.382 lat (usec): min=6758, max=58745, avg=39653.14, stdev=8650.69 00:19:11.382 clat percentiles (usec): 00:19:11.382 | 1.00th=[ 9110], 5.00th=[20317], 10.00th=[29230], 20.00th=[34341], 00:19:11.382 | 30.00th=[36963], 40.00th=[38536], 50.00th=[40633], 60.00th=[42730], 00:19:11.382 | 70.00th=[43779], 80.00th=[45876], 90.00th=[48497], 95.00th=[50070], 00:19:11.382 | 99.00th=[53740], 99.50th=[55313], 99.90th=[57410], 99.95th=[58459], 00:19:11.382 | 99.99th=[58459] 00:19:11.382 bw ( KiB/s): min= 5768, max= 8208, per=12.75%, avg=6988.00, stdev=1725.34, samples=2 00:19:11.382 iops : min= 1442, max= 2052, avg=1747.00, stdev=431.34, samples=2 00:19:11.382 lat (msec) : 10=0.73%, 20=4.46%, 50=90.29%, 100=4.52% 00:19:11.382 cpu : usr=3.58%, sys=5.27%, ctx=296, majf=0, minf=1 00:19:11.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:11.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:19:11.382 issued rwts: total=1536,1872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.382 job3: (groupid=0, jobs=1): err= 0: pid=442620: Tue Jul 23 03:18:37 2024 00:19:11.382 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:11.382 slat (usec): min=3, max=16575, avg=142.59, stdev=937.80 00:19:11.382 clat (usec): min=1541, max=78011, avg=18384.33, stdev=10015.19 00:19:11.383 lat (usec): min=1602, max=78017, avg=18526.92, stdev=10096.31 00:19:11.383 clat percentiles (usec): 00:19:11.383 | 1.00th=[ 1893], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11469], 00:19:11.383 | 30.00th=[12911], 40.00th=[14746], 50.00th=[17171], 60.00th=[18482], 00:19:11.383 | 70.00th=[19268], 80.00th=[25035], 90.00th=[28181], 95.00th=[31327], 00:19:11.383 | 99.00th=[63177], 99.50th=[69731], 99.90th=[78119], 99.95th=[78119], 00:19:11.383 | 99.99th=[78119] 00:19:11.383 write: IOPS=3210, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1003msec); 0 zone resets 00:19:11.383 slat (usec): min=3, max=12537, avg=156.63, stdev=817.81 00:19:11.383 clat (usec): min=349, max=94462, avg=21983.54, stdev=19018.92 00:19:11.383 lat (usec): min=1016, max=94485, avg=22140.17, stdev=19137.71 00:19:11.383 clat percentiles (usec): 00:19:11.383 | 1.00th=[ 4621], 5.00th=[ 6783], 10.00th=[ 8979], 20.00th=[11600], 00:19:11.383 | 30.00th=[12387], 40.00th=[14222], 50.00th=[15533], 60.00th=[19530], 00:19:11.383 | 70.00th=[20055], 80.00th=[20579], 90.00th=[52691], 95.00th=[74974], 00:19:11.383 | 99.00th=[81265], 99.50th=[88605], 99.90th=[94897], 99.95th=[94897], 00:19:11.383 | 99.99th=[94897] 00:19:11.383 bw ( KiB/s): min= 8688, max=16048, per=22.57%, avg=12368.00, stdev=5204.31, samples=2 00:19:11.383 iops : min= 2172, max= 4012, avg=3092.00, stdev=1301.08, samples=2 00:19:11.383 lat (usec) : 500=0.02% 00:19:11.383 lat (msec) : 2=0.97%, 4=1.27%, 10=7.29%, 20=62.33%, 50=21.68% 00:19:11.383 lat (msec) : 100=6.44% 00:19:11.383 cpu : usr=4.99%, sys=7.68%, ctx=322, majf=0, minf=1 00:19:11.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:11.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.383 issued rwts: total=3072,3220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.383 00:19:11.383 Run status group 0 (all jobs): 00:19:11.383 READ: bw=49.8MiB/s (52.2MB/s), 6101KiB/s-17.5MiB/s (6248kB/s-18.4MB/s), io=50.1MiB (52.6MB), run=1003-1007msec 00:19:11.383 WRITE: bw=53.5MiB/s (56.1MB/s), 7436KiB/s-17.9MiB/s (7614kB/s-18.7MB/s), io=53.9MiB (56.5MB), run=1003-1007msec 00:19:11.383 00:19:11.383 Disk stats (read/write): 00:19:11.383 nvme0n1: ios=3675/4096, merge=0/0, ticks=26536/24558, in_queue=51094, util=98.00% 00:19:11.383 nvme0n2: ios=3087/3463, merge=0/0, ticks=41717/58129, in_queue=99846, util=90.96% 00:19:11.383 nvme0n3: ios=1336/1536, merge=0/0, ticks=12242/15322, in_queue=27564, util=98.02% 00:19:11.383 nvme0n4: ios=2585/2609, merge=0/0, ticks=39482/48587, in_queue=88069, util=98.32% 00:19:11.383 03:18:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:11.383 03:18:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=442758 00:19:11.383 03:18:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:11.383 03:18:37 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@61 -- # sleep 3 00:19:11.383 [global] 00:19:11.383 thread=1 00:19:11.383 invalidate=1 00:19:11.383 rw=read 00:19:11.383 time_based=1 00:19:11.383 runtime=10 00:19:11.383 ioengine=libaio 00:19:11.383 direct=1 00:19:11.383 bs=4096 00:19:11.383 iodepth=1 00:19:11.383 norandommap=1 00:19:11.383 numjobs=1 00:19:11.383 00:19:11.383 [job0] 00:19:11.383 filename=/dev/nvme0n1 00:19:11.383 [job1] 00:19:11.383 filename=/dev/nvme0n2 00:19:11.383 [job2] 00:19:11.383 filename=/dev/nvme0n3 00:19:11.383 [job3] 00:19:11.383 filename=/dev/nvme0n4 00:19:11.383 Could not set queue depth (nvme0n1) 00:19:11.383 Could not set queue depth (nvme0n2) 00:19:11.383 Could not set queue depth (nvme0n3) 00:19:11.383 Could not set queue depth (nvme0n4) 00:19:11.383 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.383 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.383 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.383 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.383 fio-3.35 00:19:11.383 Starting 4 threads 00:19:14.663 03:18:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:14.663 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=31363072, buflen=4096 00:19:14.663 fio: pid=442849, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.663 03:18:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:14.663 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.663 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:14.663 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:19:14.663 fio: pid=442848, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.921 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.921 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:14.921 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=20742144, buflen=4096 00:19:14.921 fio: pid=442846, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.180 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.180 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:15.180 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=868352, buflen=4096 00:19:15.180 fio: pid=442847, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:15.180 00:19:15.180 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=442846: Tue Jul 23 03:18:41 2024 00:19:15.180 read: IOPS=1463, BW=5851KiB/s (5991kB/s)(19.8MiB/3462msec) 00:19:15.180 
slat (usec): min=4, max=12751, avg=25.88, stdev=179.17 00:19:15.180 clat (usec): min=305, max=41167, avg=647.49, stdev=2477.55 00:19:15.180 lat (usec): min=314, max=53919, avg=673.37, stdev=2524.58 00:19:15.180 clat percentiles (usec): 00:19:15.180 | 1.00th=[ 330], 5.00th=[ 445], 10.00th=[ 461], 20.00th=[ 478], 00:19:15.180 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 506], 00:19:15.180 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 537], 00:19:15.180 | 99.00th=[ 586], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41157], 00:19:15.180 | 99.99th=[41157] 00:19:15.180 bw ( KiB/s): min= 2568, max= 7816, per=48.22%, avg=6737.33, stdev=2051.09, samples=6 00:19:15.180 iops : min= 642, max= 1954, avg=1684.33, stdev=512.77, samples=6 00:19:15.180 lat (usec) : 500=50.82%, 750=48.77% 00:19:15.180 lat (msec) : 4=0.02%, 50=0.38% 00:19:15.180 cpu : usr=1.76%, sys=3.70%, ctx=5067, majf=0, minf=1 00:19:15.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.180 issued rwts: total=5065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.180 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=442847: Tue Jul 23 03:18:41 2024 00:19:15.180 read: IOPS=57, BW=228KiB/s (233kB/s)(848KiB/3725msec) 00:19:15.180 slat (usec): min=5, max=6973, avg=82.02, stdev=665.76 00:19:15.180 clat (usec): min=284, max=41348, avg=17390.06, stdev=20121.30 00:19:15.180 lat (usec): min=290, max=47931, avg=17439.57, stdev=20177.29 00:19:15.180 clat percentiles (usec): 00:19:15.180 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 00:19:15.180 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[41157], 00:19:15.180 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:15.180 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:15.180 | 99.99th=[41157] 00:19:15.180 bw ( KiB/s): min= 96, max= 894, per=1.52%, avg=213.43, stdev=300.13, samples=7 00:19:15.180 iops : min= 24, max= 223, avg=53.29, stdev=74.84, samples=7 00:19:15.180 lat (usec) : 500=56.81%, 750=0.94% 00:19:15.180 lat (msec) : 50=41.78% 00:19:15.180 cpu : usr=0.11%, sys=0.24%, ctx=217, majf=0, minf=1 00:19:15.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.180 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.180 issued rwts: total=213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.180 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=442848: Tue Jul 23 03:18:41 2024 00:19:15.180 read: IOPS=24, BW=98.0KiB/s (100kB/s)(312KiB/3185msec) 00:19:15.180 slat (nsec): min=12437, max=40861, avg=23791.57, stdev=9441.64 00:19:15.180 clat (usec): min=549, max=45125, avg=40507.63, stdev=4607.80 00:19:15.180 lat (usec): min=584, max=45144, avg=40531.56, stdev=4606.54 00:19:15.180 clat percentiles (usec): 00:19:15.180 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:15.180 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:15.180 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:19:15.180 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:19:15.180 | 99.99th=[45351] 00:19:15.181 bw ( KiB/s): min= 96, max= 104, per=0.70%, avg=98.67, stdev= 4.13, samples=6 00:19:15.181 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:19:15.181 lat (usec) : 750=1.27% 00:19:15.181 lat (msec) : 50=97.47% 00:19:15.181 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=1 00:19:15.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.181 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.181 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.181 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=442849: Tue Jul 23 03:18:41 2024 00:19:15.181 read: IOPS=2635, BW=10.3MiB/s (10.8MB/s)(29.9MiB/2906msec) 00:19:15.181 slat (nsec): min=4913, max=67038, avg=14127.33, stdev=8747.56 00:19:15.181 clat (usec): min=287, max=2717, avg=360.42, stdev=49.03 00:19:15.181 lat (usec): min=293, max=2724, avg=374.55, stdev=54.35 00:19:15.181 clat percentiles (usec): 00:19:15.181 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:19:15.181 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:19:15.181 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 445], 00:19:15.181 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 644], 00:19:15.181 | 99.99th=[ 2704] 00:19:15.181 bw ( KiB/s): min= 9552, max=11232, per=75.32%, avg=10523.20, stdev=650.70, samples=5 00:19:15.181 iops : min= 2388, max= 2808, avg=2630.80, stdev=162.68, samples=5 00:19:15.181 lat (usec) : 500=99.73%, 750=0.22%, 1000=0.01% 00:19:15.181 lat (msec) : 2=0.01%, 4=0.01% 00:19:15.181 cpu : usr=2.48%, sys=4.75%, ctx=7659, majf=0, minf=1 00:19:15.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.181 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.181 issued rwts: total=7658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.181 00:19:15.181 Run status group 0 (all jobs): 00:19:15.181 READ: bw=13.6MiB/s (14.3MB/s), 98.0KiB/s-10.3MiB/s (100kB/s-10.8MB/s), io=50.8MiB (53.3MB), run=2906-3725msec 00:19:15.181 00:19:15.181 Disk stats (read/write): 00:19:15.181 nvme0n1: ios=5061/0, merge=0/0, ticks=3076/0, in_queue=3076, util=95.62% 00:19:15.181 nvme0n2: ios=209/0, merge=0/0, ticks=3561/0, in_queue=3561, util=96.41% 00:19:15.181 nvme0n3: ios=76/0, merge=0/0, ticks=3080/0, in_queue=3080, util=96.79% 00:19:15.181 nvme0n4: ios=7540/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.75% 00:19:15.439 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.439 03:18:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:15.697 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.697 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:15.955 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.955 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:16.213 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.213 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:16.471 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:16.471 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 442758 00:19:16.471 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:16.471 03:18:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.471 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.471 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:16.471 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:16.471 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.728 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:16.728 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.728 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:16.728 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:16.728 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:16.728 nvmf hotplug test: fio failed as expected 00:19:16.729 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.987 rmmod nvme_tcp 00:19:16.987 rmmod nvme_fabrics 00:19:16.987 rmmod nvme_keyring 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 440795 ']' 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 440795 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 440795 ']' 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 440795 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 440795 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 440795' 00:19:16.987 killing process with pid 440795 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 440795 00:19:16.987 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 440795 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.245 03:18:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.147 03:18:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.147 00:19:19.147 real 0m23.321s 00:19:19.147 user 1m21.775s 00:19:19.147 sys 0m6.690s 00:19:19.147 03:18:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:19.147 03:18:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.147 ************************************ 00:19:19.147 END TEST nvmf_fio_target 00:19:19.147 ************************************ 00:19:19.147 03:18:45 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.147 03:18:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:19.147 03:18:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:19.147 03:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.406 ************************************ 00:19:19.406 START TEST nvmf_bdevio 00:19:19.406 ************************************ 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:19:19.406 * Looking for test storage... 00:19:19.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.406 03:18:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.407 03:18:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:21.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:21.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:21.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:21.309 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:21.309 00:19:21.309 --- 10.0.0.2 ping statistics --- 00:19:21.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.309 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:21.309 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:19:21.310 00:19:21.310 --- 10.0.0.1 ping statistics --- 00:19:21.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.310 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=445465 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 445465 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 445465 ']' 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.310 03:18:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.310 [2024-07-23 03:18:47.854454] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:21.310 [2024-07-23 03:18:47.854547] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.568 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.568 [2024-07-23 03:18:47.921704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.568 [2024-07-23 03:18:48.013750] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.568 [2024-07-23 03:18:48.013814] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:21.568 [2024-07-23 03:18:48.013844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.568 [2024-07-23 03:18:48.013856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.568 [2024-07-23 03:18:48.013866] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.568 [2024-07-23 03:18:48.014001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.568 [2024-07-23 03:18:48.014065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.568 [2024-07-23 03:18:48.014132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.568 [2024-07-23 03:18:48.014134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.568 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.568 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:21.569 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.569 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.569 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 [2024-07-23 03:18:48.167487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 Malloc0 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:21.827 [2024-07-23 03:18:48.220888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.827 { 00:19:21.827 "params": { 00:19:21.827 "name": "Nvme$subsystem", 00:19:21.827 "trtype": "$TEST_TRANSPORT", 00:19:21.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.827 "adrfam": "ipv4", 00:19:21.827 "trsvcid": "$NVMF_PORT", 00:19:21.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.827 "hdgst": ${hdgst:-false}, 00:19:21.827 "ddgst": ${ddgst:-false} 00:19:21.827 }, 00:19:21.827 "method": "bdev_nvme_attach_controller" 00:19:21.827 } 00:19:21.827 EOF 00:19:21.827 )") 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:21.827 03:18:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.827 "params": { 00:19:21.827 "name": "Nvme1", 00:19:21.827 "trtype": "tcp", 00:19:21.827 "traddr": "10.0.0.2", 00:19:21.827 "adrfam": "ipv4", 00:19:21.827 "trsvcid": "4420", 00:19:21.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.827 "hdgst": false, 00:19:21.827 "ddgst": false 00:19:21.827 }, 00:19:21.827 "method": "bdev_nvme_attach_controller" 00:19:21.827 }' 00:19:21.827 [2024-07-23 03:18:48.268837] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:21.827 [2024-07-23 03:18:48.268926] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445496 ] 00:19:21.827 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.827 [2024-07-23 03:18:48.334620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.086 [2024-07-23 03:18:48.426547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.086 [2024-07-23 03:18:48.426597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.086 [2024-07-23 03:18:48.426600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.344 I/O targets: 00:19:22.344 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.344 00:19:22.344 00:19:22.344 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.344 http://cunit.sourceforge.net/ 00:19:22.344 00:19:22.344 00:19:22.344 Suite: bdevio tests on: Nvme1n1 00:19:22.344 Test: blockdev write read block ...passed 00:19:22.344 Test: blockdev write zeroes read block ...passed 00:19:22.344 Test: blockdev write zeroes read no split ...passed 00:19:22.344 Test: blockdev write zeroes read split ...passed 00:19:22.601 Test: blockdev write zeroes read split partial ...passed 00:19:22.602 Test: blockdev reset ...[2024-07-23 03:18:48.930551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.602 [2024-07-23 03:18:48.930668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1019f80 (9): Bad file descriptor 00:19:22.602 [2024-07-23 03:18:48.983325] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.602 passed 00:19:22.602 Test: blockdev write read 8 blocks ...passed 00:19:22.602 Test: blockdev write read size > 128k ...passed 00:19:22.602 Test: blockdev write read invalid size ...passed 00:19:22.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.602 Test: blockdev write read max offset ...passed 00:19:22.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.860 Test: blockdev writev readv 8 blocks ...passed 00:19:22.860 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.860 Test: blockdev writev readv block ...passed 00:19:22.860 Test: blockdev writev readv size > 128k ...passed 00:19:22.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.860 Test: blockdev comparev and writev ...[2024-07-23 03:18:49.281900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.281937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.281962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.281978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.282376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.282409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.282431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.282447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.282839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.282864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.282885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.282901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.283301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.283325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.283345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.860 [2024-07-23 03:18:49.283361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.860 passed 00:19:22.860 Test: blockdev nvme passthru rw ...passed 00:19:22.860 Test: blockdev nvme passthru vendor specific ...[2024-07-23 03:18:49.366960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.860 [2024-07-23 03:18:49.366988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.367181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.860 [2024-07-23 03:18:49.367204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.367394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.860 [2024-07-23 03:18:49.367417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.860 [2024-07-23 03:18:49.367607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.860 [2024-07-23 03:18:49.367639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.860 passed 00:19:22.860 Test: blockdev nvme admin passthru ...passed 00:19:22.860 Test: blockdev copy ...passed 00:19:22.860 00:19:22.860 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.860 suites 1 1 n/a 0 0 00:19:22.860 tests 23 23 23 0 0 00:19:22.860 asserts 152 152 152 0 n/a 00:19:22.860 00:19:22.860 Elapsed time = 1.396 seconds 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.119 rmmod nvme_tcp 00:19:23.119 rmmod nvme_fabrics 00:19:23.119 rmmod nvme_keyring 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 445465 ']' 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 445465 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
445465 ']' 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 445465 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:23.119 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 445465 00:19:23.377 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:23.378 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:23.378 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 445465' 00:19:23.378 killing process with pid 445465 00:19:23.378 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 445465 00:19:23.378 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 445465 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.638 03:18:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.545 03:18:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.545 00:19:25.545 real 0m6.290s 00:19:25.545 user 0m11.013s 00:19:25.545 sys 0m2.019s 00:19:25.545 03:18:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.545 03:18:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 ************************************ 00:19:25.545 END TEST nvmf_bdevio 00:19:25.545 ************************************ 00:19:25.545 03:18:52 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.545 03:18:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:25.545 03:18:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:25.545 03:18:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 ************************************ 00:19:25.545 START TEST nvmf_auth_target 00:19:25.545 ************************************ 00:19:25.545 03:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.804 * Looking for test storage... 
00:19:25.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:25.804 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.805 03:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.711 03:18:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:27.711 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:27.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.711 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:27.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:27.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target 
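The device-discovery loop above maps each supported NIC's PCI function to its Linux net device through sysfs; the two E810 ports at 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1. A reduced sketch of that lookup (PCI addresses hard-coded for illustration; the operstate test is a stand-in for the trace's "up == up" check):

# Map PCI functions to net device names via sysfs, as nvmf/common.sh@382-401 does above.
pci_devs=(0000:0a:00.0 0000:0a:00.1)   # illustrative; the real list is built from device IDs
net_devs=()
for pci in "${pci_devs[@]}"; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ $(< "$dev/operstate") == up ]] || continue
        echo "Found net devices under $pci: ${dev##*/}"
        net_devs+=("${dev##*/}")
    done
done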
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.712 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:19:27.971 00:19:27.971 --- 10.0.0.2 ping statistics --- 00:19:27.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.971 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:19:27.971 00:19:27.971 --- 10.0.0.1 ping statistics --- 00:19:27.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.971 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=447685 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 447685 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 447685 ']' 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
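With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and reachability confirmed by the two pings above, nvmfappstart launches the target inside that namespace and waits for its RPC socket. A condensed sketch of the launch-and-wait pattern (the polling loop is illustrative; the real waitforlisten helper is more thorough):

# Start nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock to answer.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done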
00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:27.971 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=447705 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2b0e42d535cce99ca5e12ad5dd124e10aa9425e7266cf909 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:28.257 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UNN 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2b0e42d535cce99ca5e12ad5dd124e10aa9425e7266cf909 0 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2b0e42d535cce99ca5e12ad5dd124e10aa9425e7266cf909 0 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2b0e42d535cce99ca5e12ad5dd124e10aa9425e7266cf909 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UNN 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UNN 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UNN 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=374cc05a85c94a3ed04b5d439438c0dfef39dff8e37360eb5c31d717af75a365 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xUH 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 374cc05a85c94a3ed04b5d439438c0dfef39dff8e37360eb5c31d717af75a365 3 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 374cc05a85c94a3ed04b5d439438c0dfef39dff8e37360eb5c31d717af75a365 3 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=374cc05a85c94a3ed04b5d439438c0dfef39dff8e37360eb5c31d717af75a365 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xUH 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xUH 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xUH 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d66310e9d7dea83bbb87c2c6e5b02da9 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2On 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d66310e9d7dea83bbb87c2c6e5b02da9 1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d66310e9d7dea83bbb87c2c6e5b02da9 1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=d66310e9d7dea83bbb87c2c6e5b02da9 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:28.258 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2On 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2On 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.2On 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d8912a73f54c0f53946062e0b73b0e02fc3cdec32ab122d4 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ApS 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d8912a73f54c0f53946062e0b73b0e02fc3cdec32ab122d4 2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d8912a73f54c0f53946062e0b73b0e02fc3cdec32ab122d4 2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d8912a73f54c0f53946062e0b73b0e02fc3cdec32ab122d4 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ApS 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ApS 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ApS 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dccaf268973054509d2cb332893bfd41524e679cb848a4a4 00:19:28.517 
03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NAG 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dccaf268973054509d2cb332893bfd41524e679cb848a4a4 2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dccaf268973054509d2cb332893bfd41524e679cb848a4a4 2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dccaf268973054509d2cb332893bfd41524e679cb848a4a4 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NAG 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NAG 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.NAG 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf14b97475365178333f74b40b67e49a 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UsF 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf14b97475365178333f74b40b67e49a 1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf14b97475365178333f74b40b67e49a 1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf14b97475365178333f74b40b67e49a 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UsF 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UsF 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.UsF 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=31534783217cb3d3c62d850cacda2ea601d1248e09a2d5b7c66067824af01e6c 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JSR 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 31534783217cb3d3c62d850cacda2ea601d1248e09a2d5b7c66067824af01e6c 3 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 31534783217cb3d3c62d850cacda2ea601d1248e09a2d5b7c66067824af01e6c 3 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=31534783217cb3d3c62d850cacda2ea601d1248e09a2d5b7c66067824af01e6c 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:28.517 03:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JSR 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JSR 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.JSR 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 447685 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 447685 ']' 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
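Each gen_dhchap_key call above pulls len/2 random bytes with xxd, and format_dhchap_key then wraps the hex string into the DHHC-1 form used later by nvme connect: base64 of the ASCII key bytes plus a little-endian CRC-32, between a "DHHC-1:<digest id>:" prefix and a trailing colon (digest ids per the table above: 0=null, 1=sha256, 2=sha384, 3=sha512). A self-contained sketch of the same formatting (the inline Python mirrors what the trace's hidden "python -" step appears to do):

# Produce a DHHC-1 secret like the key0 case above (null digest, 48 hex chars).
key=$(xxd -p -c0 -l 24 /dev/urandom)
digest=0
python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # checksum appended before encoding
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF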
00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.517 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 447705 /var/tmp/host.sock 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 447705 ']' 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:28.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.775 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UNN 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UNN 00:19:29.033 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UNN 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xUH ]] 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xUH 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xUH 00:19:29.599 03:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xUH 00:19:29.599 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:29.599 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2On 00:19:29.599 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.599 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.857 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.857 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2On 00:19:29.857 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2On 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ApS ]] 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApS 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApS 00:19:30.114 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ApS 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NAG 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NAG 00:19:30.372 03:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NAG 00:19:30.629 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.UsF ]] 00:19:30.629 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UsF 00:19:30.629 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.629 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.630 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.630 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UsF 00:19:30.630 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
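The sequence above registers every generated key file twice, under the same name on each side: once on the target's default RPC socket (the bare rpc_cmd calls, /var/tmp/spdk.sock) and once on the host-side spdk_tgt (the hostrpc wrapper, /var/tmp/host.sock). Written out as explicit RPC calls for the key0 pair produced earlier in this run (rpc.py path shown relative to the spdk checkout):

# Register key0 and its controller key ckey0 on both target and host.
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.UNN
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xUH
./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.UNN
./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xUH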
/tmp/spdk.key-sha256.UsF 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JSR 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JSR 00:19:30.887 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JSR 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.145 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.403 03:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.660 00:19:31.660 03:18:58 nvmf_tcp.nvmf_auth_target -- 
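For each key the connect_authenticate helper then does three things: restrict the host's negotiable digests and DH groups with bdev_nvme_set_options, grant the host NQN access on the subsystem with the DH-HMAC-CHAP key pair, and attach a controller from the host side using the same key names. Spelled out for the key0 / sha256 / null pass running above (NQN, address, and key names taken from this run):

# Host: negotiate only sha256 with the null DH group.
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
# Target: allow the host NQN on cnode0, authenticated with key0/ckey0.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host: attach, presenting the matching keys.
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0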
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.660 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.660 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.918 { 00:19:31.918 "cntlid": 1, 00:19:31.918 "qid": 0, 00:19:31.918 "state": "enabled", 00:19:31.918 "listen_address": { 00:19:31.918 "trtype": "TCP", 00:19:31.918 "adrfam": "IPv4", 00:19:31.918 "traddr": "10.0.0.2", 00:19:31.918 "trsvcid": "4420" 00:19:31.918 }, 00:19:31.918 "peer_address": { 00:19:31.918 "trtype": "TCP", 00:19:31.918 "adrfam": "IPv4", 00:19:31.918 "traddr": "10.0.0.1", 00:19:31.918 "trsvcid": "59092" 00:19:31.918 }, 00:19:31.918 "auth": { 00:19:31.918 "state": "completed", 00:19:31.918 "digest": "sha256", 00:19:31.918 "dhgroup": "null" 00:19:31.918 } 00:19:31.918 } 00:19:31.918 ]' 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.918 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.176 03:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
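Once the RPC-driven attach is verified (the qpair's auth block reports state "completed" with the expected digest and dhgroup) and detached again, the same credentials are exercised through the kernel initiator: nvme connect takes the formatted DHHC-1 strings directly. Condensed from the key0 pass above, with the long secrets abbreviated here; the full strings are printed verbatim in this run:

# Kernel-initiator pass for key0 (target/auth.sh@52-55 above).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:MmIw...wg==:' --dhchap-ctrl-secret 'DHHC-1:03:Mzc0...CqQ=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0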
+x 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.110 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.368 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.626 03:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.626 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.626 03:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.883 00:19:33.883 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.883 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.883 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.141 { 00:19:34.141 "cntlid": 3, 00:19:34.141 "qid": 0, 00:19:34.141 "state": "enabled", 00:19:34.141 "listen_address": { 00:19:34.141 
"trtype": "TCP", 00:19:34.141 "adrfam": "IPv4", 00:19:34.141 "traddr": "10.0.0.2", 00:19:34.141 "trsvcid": "4420" 00:19:34.141 }, 00:19:34.141 "peer_address": { 00:19:34.141 "trtype": "TCP", 00:19:34.141 "adrfam": "IPv4", 00:19:34.141 "traddr": "10.0.0.1", 00:19:34.141 "trsvcid": "59124" 00:19:34.141 }, 00:19:34.141 "auth": { 00:19:34.141 "state": "completed", 00:19:34.141 "digest": "sha256", 00:19:34.141 "dhgroup": "null" 00:19:34.141 } 00:19:34.141 } 00:19:34.141 ]' 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.141 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.399 03:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.332 03:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.899 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.899 00:19:36.157 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.157 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.158 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.158 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.158 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.158 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.158 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.415 { 00:19:36.415 "cntlid": 5, 00:19:36.415 "qid": 0, 00:19:36.415 "state": "enabled", 00:19:36.415 "listen_address": { 00:19:36.415 "trtype": "TCP", 00:19:36.415 "adrfam": "IPv4", 00:19:36.415 "traddr": "10.0.0.2", 00:19:36.415 "trsvcid": "4420" 00:19:36.415 }, 00:19:36.415 "peer_address": { 00:19:36.415 "trtype": "TCP", 00:19:36.415 "adrfam": "IPv4", 00:19:36.415 "traddr": "10.0.0.1", 00:19:36.415 "trsvcid": "59148" 00:19:36.415 }, 00:19:36.415 "auth": { 00:19:36.415 "state": "completed", 00:19:36.415 "digest": "sha256", 00:19:36.415 "dhgroup": "null" 00:19:36.415 } 00:19:36.415 } 00:19:36.415 ]' 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.415 03:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.673 03:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.608 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.866 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.867 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.867 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
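Note the key3 pass that starts above: ckeys[3] was left empty, so the parameter expansion at target/auth.sh@37 contributes nothing and both nvmf_subsystem_add_host and bdev_nvme_attach_controller are issued with only --dhchap-key key3, i.e. the controller is not asked to authenticate back to the host for that combination. The guard is plain conditional expansion (keyid here stands for the script's positional $3):

# Pass a controller key only when one exists for this index.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"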
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.124 00:19:38.125 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.125 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.125 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.383 { 00:19:38.383 "cntlid": 7, 00:19:38.383 "qid": 0, 00:19:38.383 "state": "enabled", 00:19:38.383 "listen_address": { 00:19:38.383 "trtype": "TCP", 00:19:38.383 "adrfam": "IPv4", 00:19:38.383 "traddr": "10.0.0.2", 00:19:38.383 "trsvcid": "4420" 00:19:38.383 }, 00:19:38.383 "peer_address": { 00:19:38.383 "trtype": "TCP", 00:19:38.383 "adrfam": "IPv4", 00:19:38.383 "traddr": "10.0.0.1", 00:19:38.383 "trsvcid": "59186" 00:19:38.383 }, 00:19:38.383 "auth": { 00:19:38.383 "state": "completed", 00:19:38.383 "digest": "sha256", 00:19:38.383 "dhgroup": "null" 00:19:38.383 } 00:19:38.383 } 00:19:38.383 ]' 00:19:38.383 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.641 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.641 03:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.641 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.641 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.641 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.641 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.641 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.900 03:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.833 
03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.833 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.091 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.657 00:19:40.657 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.657 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.657 03:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.657 03:19:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.657 { 00:19:40.657 "cntlid": 9, 00:19:40.657 "qid": 0, 00:19:40.657 "state": "enabled", 00:19:40.657 "listen_address": { 00:19:40.657 "trtype": "TCP", 00:19:40.657 "adrfam": "IPv4", 00:19:40.657 "traddr": "10.0.0.2", 00:19:40.657 "trsvcid": "4420" 00:19:40.657 }, 00:19:40.657 "peer_address": { 00:19:40.657 "trtype": "TCP", 00:19:40.657 "adrfam": "IPv4", 00:19:40.657 "traddr": "10.0.0.1", 00:19:40.657 "trsvcid": "38888" 00:19:40.657 }, 00:19:40.657 "auth": { 00:19:40.657 "state": "completed", 00:19:40.657 "digest": "sha256", 00:19:40.657 "dhgroup": "ffdhe2048" 00:19:40.657 } 00:19:40.657 } 00:19:40.657 ]' 00:19:40.657 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.916 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.174 03:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.108 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.366 03:19:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.366 03:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.934 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.934 { 00:19:42.934 "cntlid": 11, 00:19:42.934 "qid": 0, 00:19:42.934 "state": "enabled", 00:19:42.934 "listen_address": { 00:19:42.934 "trtype": "TCP", 00:19:42.934 "adrfam": "IPv4", 00:19:42.934 "traddr": "10.0.0.2", 00:19:42.934 "trsvcid": "4420" 00:19:42.934 }, 00:19:42.934 "peer_address": { 00:19:42.934 "trtype": "TCP", 00:19:42.934 "adrfam": "IPv4", 00:19:42.934 "traddr": "10.0.0.1", 00:19:42.934 "trsvcid": "38920" 00:19:42.934 }, 00:19:42.934 "auth": { 00:19:42.934 "state": "completed", 00:19:42.934 "digest": "sha256", 00:19:42.934 "dhgroup": "ffdhe2048" 00:19:42.934 } 00:19:42.934 } 00:19:42.934 ]' 00:19:42.934 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.228 03:19:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.228 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.487 03:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.420 03:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.679 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.937 00:19:44.937 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.937 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.937 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.195 { 00:19:45.195 "cntlid": 13, 00:19:45.195 "qid": 0, 00:19:45.195 "state": "enabled", 00:19:45.195 "listen_address": { 00:19:45.195 "trtype": "TCP", 00:19:45.195 "adrfam": "IPv4", 00:19:45.195 "traddr": "10.0.0.2", 00:19:45.195 "trsvcid": "4420" 00:19:45.195 }, 00:19:45.195 "peer_address": { 00:19:45.195 "trtype": "TCP", 00:19:45.195 "adrfam": "IPv4", 00:19:45.195 "traddr": "10.0.0.1", 00:19:45.195 "trsvcid": "38932" 00:19:45.195 }, 00:19:45.195 "auth": { 00:19:45.195 "state": "completed", 00:19:45.195 "digest": "sha256", 00:19:45.195 "dhgroup": "ffdhe2048" 00:19:45.195 } 00:19:45.195 } 00:19:45.195 ]' 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.195 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.453 03:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:19:46.386 03:19:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.386 03:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.386 03:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.386 03:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.386 03:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.644 03:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.644 03:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.644 03:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.644 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.900 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.900 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.900 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.157 00:19:47.157 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.157 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.157 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
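The trace above is one pass of the test's per-key loop: the host-side bdev_nvme_set_options call restricts DH-HMAC-CHAP to the digest/dhgroup under test, the target authorizes the host NQN with the matching key, the host attaches a controller with that key, and nvmf_subsystem_get_qpairs is checked for auth state "completed". A minimal bash sketch of that sequence, assuming the rpc.py path and /var/tmp/host.sock socket shown in the log, a target RPC server on the default socket, and keyN/ckeyN key names registered earlier in the run (not shown here):

  #!/usr/bin/env bash
  # One connect_authenticate-style iteration, condensed from the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock        # host (initiator) SPDK application socket
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  digest=sha256 dhgroup=ffdhe2048 keyid=3

  # Host side: only allow the digest/dhgroup under test for DH-HMAC-CHAP.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side (default RPC socket): authorize the host NQN with keyN; passing
  # --dhchap-ctrlr-key "ckey$keyid" as well, as some iterations above do, makes
  # the authentication bidirectional.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"

  # Host side: attach a controller, authenticating with the same key.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid"

  # Verify on the target that the new qpair finished authentication with the
  # expected digest and dhgroup (the log checks .digest, .dhgroup, .state).
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # Tear down before the next digest/dhgroup/key combination.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0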
00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.415 { 00:19:47.415 "cntlid": 15, 00:19:47.415 "qid": 0, 00:19:47.415 "state": "enabled", 00:19:47.415 "listen_address": { 00:19:47.415 "trtype": "TCP", 00:19:47.415 "adrfam": "IPv4", 00:19:47.415 "traddr": "10.0.0.2", 00:19:47.415 "trsvcid": "4420" 00:19:47.415 }, 00:19:47.415 "peer_address": { 00:19:47.415 "trtype": "TCP", 00:19:47.415 "adrfam": "IPv4", 00:19:47.415 "traddr": "10.0.0.1", 00:19:47.415 "trsvcid": "38970" 00:19:47.415 }, 00:19:47.415 "auth": { 00:19:47.415 "state": "completed", 00:19:47.415 "digest": "sha256", 00:19:47.415 "dhgroup": "ffdhe2048" 00:19:47.415 } 00:19:47.415 } 00:19:47.415 ]' 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.415 03:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.673 03:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.608 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.866 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.432 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.432 03:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.432 { 00:19:49.432 "cntlid": 17, 00:19:49.432 "qid": 0, 00:19:49.432 "state": "enabled", 00:19:49.432 "listen_address": { 00:19:49.432 "trtype": "TCP", 00:19:49.432 "adrfam": "IPv4", 00:19:49.432 "traddr": "10.0.0.2", 00:19:49.432 "trsvcid": "4420" 00:19:49.432 }, 00:19:49.432 "peer_address": { 00:19:49.432 "trtype": "TCP", 00:19:49.432 "adrfam": "IPv4", 00:19:49.432 "traddr": "10.0.0.1", 00:19:49.432 "trsvcid": "38992" 00:19:49.432 }, 00:19:49.432 "auth": { 00:19:49.432 "state": "completed", 00:19:49.432 "digest": "sha256", 00:19:49.432 "dhgroup": "ffdhe3072" 00:19:49.432 } 00:19:49.432 } 00:19:49.432 ]' 00:19:49.432 03:19:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.690 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.948 03:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.881 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.140 
03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.140 03:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.705 00:19:51.705 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.705 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.705 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.705 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.962 { 00:19:51.962 "cntlid": 19, 00:19:51.962 "qid": 0, 00:19:51.962 "state": "enabled", 00:19:51.962 "listen_address": { 00:19:51.962 "trtype": "TCP", 00:19:51.962 "adrfam": "IPv4", 00:19:51.962 "traddr": "10.0.0.2", 00:19:51.962 "trsvcid": "4420" 00:19:51.962 }, 00:19:51.962 "peer_address": { 00:19:51.962 "trtype": "TCP", 00:19:51.962 "adrfam": "IPv4", 00:19:51.962 "traddr": "10.0.0.1", 00:19:51.962 "trsvcid": "32876" 00:19:51.962 }, 00:19:51.962 "auth": { 00:19:51.962 "state": "completed", 00:19:51.962 "digest": "sha256", 00:19:51.962 "dhgroup": "ffdhe3072" 00:19:51.962 } 00:19:51.962 } 00:19:51.962 ]' 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.962 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.219 03:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.151 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.410 03:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.976 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.976 { 00:19:53.976 "cntlid": 21, 00:19:53.976 "qid": 0, 00:19:53.976 "state": "enabled", 00:19:53.976 "listen_address": { 00:19:53.976 "trtype": "TCP", 00:19:53.976 "adrfam": "IPv4", 00:19:53.976 "traddr": "10.0.0.2", 00:19:53.976 "trsvcid": "4420" 00:19:53.976 }, 00:19:53.976 "peer_address": { 00:19:53.976 "trtype": "TCP", 00:19:53.976 "adrfam": "IPv4", 00:19:53.976 "traddr": "10.0.0.1", 00:19:53.976 "trsvcid": "32900" 00:19:53.976 }, 00:19:53.976 "auth": { 00:19:53.976 "state": "completed", 00:19:53.976 "digest": "sha256", 00:19:53.976 "dhgroup": "ffdhe3072" 00:19:53.976 } 00:19:53.976 } 00:19:53.976 ]' 00:19:53.976 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.233 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.490 03:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.424 03:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.682 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.940 00:19:55.940 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.940 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.940 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.198 { 00:19:56.198 "cntlid": 23, 00:19:56.198 "qid": 0, 00:19:56.198 "state": "enabled", 00:19:56.198 "listen_address": { 00:19:56.198 "trtype": "TCP", 00:19:56.198 "adrfam": "IPv4", 00:19:56.198 "traddr": "10.0.0.2", 00:19:56.198 "trsvcid": "4420" 00:19:56.198 }, 00:19:56.198 "peer_address": { 00:19:56.198 "trtype": "TCP", 00:19:56.198 
"adrfam": "IPv4", 00:19:56.198 "traddr": "10.0.0.1", 00:19:56.198 "trsvcid": "32922" 00:19:56.198 }, 00:19:56.198 "auth": { 00:19:56.198 "state": "completed", 00:19:56.198 "digest": "sha256", 00:19:56.198 "dhgroup": "ffdhe3072" 00:19:56.198 } 00:19:56.198 } 00:19:56.198 ]' 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.198 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.456 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.456 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.456 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.456 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.456 03:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.714 03:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.646 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.905 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.217 00:19:58.217 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.217 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.217 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.475 { 00:19:58.475 "cntlid": 25, 00:19:58.475 "qid": 0, 00:19:58.475 "state": "enabled", 00:19:58.475 "listen_address": { 00:19:58.475 "trtype": "TCP", 00:19:58.475 "adrfam": "IPv4", 00:19:58.475 "traddr": "10.0.0.2", 00:19:58.475 "trsvcid": "4420" 00:19:58.475 }, 00:19:58.475 "peer_address": { 00:19:58.475 "trtype": "TCP", 00:19:58.475 "adrfam": "IPv4", 00:19:58.475 "traddr": "10.0.0.1", 00:19:58.475 "trsvcid": "32948" 00:19:58.475 }, 00:19:58.475 "auth": { 00:19:58.475 "state": "completed", 00:19:58.475 "digest": "sha256", 00:19:58.475 "dhgroup": "ffdhe4096" 00:19:58.475 } 00:19:58.475 } 00:19:58.475 ]' 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.475 03:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.475 03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.475 03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.733 03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.733 03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.733 
03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.733 03:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.107 03:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.673 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.673 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.673 { 00:20:00.673 "cntlid": 27, 00:20:00.673 "qid": 0, 00:20:00.673 "state": "enabled", 00:20:00.673 "listen_address": { 00:20:00.673 "trtype": "TCP", 00:20:00.673 "adrfam": "IPv4", 00:20:00.673 "traddr": "10.0.0.2", 00:20:00.673 "trsvcid": "4420" 00:20:00.673 }, 00:20:00.673 "peer_address": { 00:20:00.673 "trtype": "TCP", 00:20:00.673 "adrfam": "IPv4", 00:20:00.673 "traddr": "10.0.0.1", 00:20:00.673 "trsvcid": "49786" 00:20:00.673 }, 00:20:00.673 "auth": { 00:20:00.673 "state": "completed", 00:20:00.673 "digest": "sha256", 00:20:00.673 "dhgroup": "ffdhe4096" 00:20:00.673 } 00:20:00.673 } 00:20:00.673 ]' 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.931 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.189 03:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.121 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.380 03:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.638 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.896 03:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.154 
03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.154 { 00:20:03.154 "cntlid": 29, 00:20:03.154 "qid": 0, 00:20:03.154 "state": "enabled", 00:20:03.154 "listen_address": { 00:20:03.154 "trtype": "TCP", 00:20:03.154 "adrfam": "IPv4", 00:20:03.154 "traddr": "10.0.0.2", 00:20:03.154 "trsvcid": "4420" 00:20:03.154 }, 00:20:03.154 "peer_address": { 00:20:03.154 "trtype": "TCP", 00:20:03.154 "adrfam": "IPv4", 00:20:03.154 "traddr": "10.0.0.1", 00:20:03.154 "trsvcid": "49812" 00:20:03.154 }, 00:20:03.154 "auth": { 00:20:03.154 "state": "completed", 00:20:03.154 "digest": "sha256", 00:20:03.154 "dhgroup": "ffdhe4096" 00:20:03.154 } 00:20:03.154 } 00:20:03.154 ]' 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.154 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.412 03:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.346 03:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.912 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.170 00:20:05.170 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.170 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.170 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.428 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.428 { 00:20:05.428 "cntlid": 31, 00:20:05.428 "qid": 0, 00:20:05.428 "state": "enabled", 00:20:05.428 "listen_address": { 00:20:05.428 "trtype": "TCP", 00:20:05.428 "adrfam": "IPv4", 00:20:05.428 "traddr": "10.0.0.2", 00:20:05.428 "trsvcid": "4420" 00:20:05.428 }, 00:20:05.428 "peer_address": { 00:20:05.428 "trtype": "TCP", 00:20:05.428 "adrfam": "IPv4", 00:20:05.428 "traddr": "10.0.0.1", 00:20:05.428 "trsvcid": "49832" 00:20:05.428 }, 00:20:05.429 "auth": { 00:20:05.429 "state": "completed", 00:20:05.429 "digest": "sha256", 00:20:05.429 "dhgroup": "ffdhe4096" 00:20:05.429 } 00:20:05.429 } 00:20:05.429 ]' 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.429 03:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.429 03:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.429 03:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.687 03:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:07.060 03:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.626 00:20:07.626 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.626 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.626 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.884 { 00:20:07.884 "cntlid": 33, 00:20:07.884 "qid": 0, 00:20:07.884 "state": "enabled", 00:20:07.884 "listen_address": { 00:20:07.884 "trtype": "TCP", 00:20:07.884 "adrfam": "IPv4", 00:20:07.884 "traddr": "10.0.0.2", 00:20:07.884 "trsvcid": "4420" 00:20:07.884 }, 00:20:07.884 "peer_address": { 00:20:07.884 "trtype": "TCP", 00:20:07.884 "adrfam": "IPv4", 00:20:07.884 "traddr": "10.0.0.1", 00:20:07.884 "trsvcid": "49864" 00:20:07.884 }, 00:20:07.884 "auth": { 00:20:07.884 "state": "completed", 00:20:07.884 "digest": "sha256", 00:20:07.884 "dhgroup": "ffdhe6144" 00:20:07.884 } 00:20:07.884 } 00:20:07.884 ]' 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.884 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.142 03:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:09.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.516 03:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.081 00:20:10.081 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.081 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.081 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.339 { 00:20:10.339 "cntlid": 35, 00:20:10.339 "qid": 0, 00:20:10.339 "state": "enabled", 00:20:10.339 "listen_address": { 00:20:10.339 "trtype": "TCP", 00:20:10.339 "adrfam": "IPv4", 00:20:10.339 "traddr": "10.0.0.2", 00:20:10.339 "trsvcid": "4420" 00:20:10.339 }, 00:20:10.339 "peer_address": { 00:20:10.339 "trtype": "TCP", 00:20:10.339 "adrfam": "IPv4", 00:20:10.339 "traddr": "10.0.0.1", 00:20:10.339 "trsvcid": "49890" 00:20:10.339 }, 00:20:10.339 "auth": { 00:20:10.339 "state": "completed", 00:20:10.339 "digest": "sha256", 00:20:10.339 "dhgroup": "ffdhe6144" 00:20:10.339 } 00:20:10.339 } 00:20:10.339 ]' 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.339 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.596 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.596 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.596 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.596 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.596 03:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.853 03:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.784 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.042 03:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.607 00:20:12.607 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.607 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.607 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.864 { 00:20:12.864 "cntlid": 37, 00:20:12.864 "qid": 0, 00:20:12.864 "state": "enabled", 00:20:12.864 "listen_address": { 00:20:12.864 "trtype": "TCP", 00:20:12.864 "adrfam": "IPv4", 00:20:12.864 "traddr": "10.0.0.2", 00:20:12.864 "trsvcid": "4420" 00:20:12.864 }, 00:20:12.864 "peer_address": { 00:20:12.864 "trtype": "TCP", 00:20:12.864 "adrfam": "IPv4", 00:20:12.864 "traddr": "10.0.0.1", 00:20:12.864 "trsvcid": "48296" 00:20:12.864 }, 00:20:12.864 "auth": { 00:20:12.864 "state": "completed", 00:20:12.864 "digest": "sha256", 00:20:12.864 "dhgroup": "ffdhe6144" 00:20:12.864 } 00:20:12.864 } 00:20:12.864 ]' 00:20:12.864 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.865 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.121 03:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:14.100 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.358 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.359 03:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.924 00:20:14.924 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.924 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.924 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.181 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.181 { 00:20:15.181 "cntlid": 39, 00:20:15.181 "qid": 0, 00:20:15.181 "state": "enabled", 00:20:15.181 "listen_address": { 00:20:15.181 "trtype": "TCP", 00:20:15.181 "adrfam": "IPv4", 00:20:15.181 "traddr": "10.0.0.2", 00:20:15.181 "trsvcid": "4420" 00:20:15.181 }, 00:20:15.181 "peer_address": { 00:20:15.181 "trtype": "TCP", 00:20:15.181 "adrfam": "IPv4", 00:20:15.181 "traddr": "10.0.0.1", 00:20:15.181 "trsvcid": "48326" 00:20:15.181 }, 00:20:15.181 "auth": { 00:20:15.182 "state": "completed", 00:20:15.182 "digest": "sha256", 00:20:15.182 "dhgroup": "ffdhe6144" 00:20:15.182 } 00:20:15.182 } 00:20:15.182 ]' 00:20:15.182 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.439 03:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.697 03:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.629 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.887 03:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.820 00:20:17.820 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.820 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.820 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.078 { 00:20:18.078 "cntlid": 41, 00:20:18.078 "qid": 0, 00:20:18.078 "state": "enabled", 00:20:18.078 "listen_address": { 00:20:18.078 "trtype": "TCP", 00:20:18.078 "adrfam": "IPv4", 00:20:18.078 "traddr": "10.0.0.2", 00:20:18.078 "trsvcid": "4420" 00:20:18.078 }, 00:20:18.078 "peer_address": { 00:20:18.078 "trtype": "TCP", 00:20:18.078 "adrfam": "IPv4", 00:20:18.078 "traddr": "10.0.0.1", 00:20:18.078 "trsvcid": "48348" 00:20:18.078 }, 00:20:18.078 "auth": { 00:20:18.078 "state": "completed", 00:20:18.078 "digest": "sha256", 00:20:18.078 "dhgroup": "ffdhe8192" 00:20:18.078 } 00:20:18.078 } 00:20:18.078 ]' 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.078 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.336 03:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.270 03:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.528 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.462 00:20:20.462 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.462 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.462 03:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.720 { 00:20:20.720 "cntlid": 43, 00:20:20.720 "qid": 0, 00:20:20.720 "state": "enabled", 00:20:20.720 "listen_address": { 00:20:20.720 "trtype": "TCP", 00:20:20.720 "adrfam": "IPv4", 00:20:20.720 "traddr": "10.0.0.2", 00:20:20.720 "trsvcid": "4420" 00:20:20.720 }, 00:20:20.720 "peer_address": { 
00:20:20.720 "trtype": "TCP", 00:20:20.720 "adrfam": "IPv4", 00:20:20.720 "traddr": "10.0.0.1", 00:20:20.720 "trsvcid": "48372" 00:20:20.720 }, 00:20:20.720 "auth": { 00:20:20.720 "state": "completed", 00:20:20.720 "digest": "sha256", 00:20:20.720 "dhgroup": "ffdhe8192" 00:20:20.720 } 00:20:20.720 } 00:20:20.720 ]' 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.720 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.978 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.978 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.978 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.978 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.978 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.236 03:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.170 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.428 03:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.361 00:20:23.361 03:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.361 03:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.361 03:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.619 { 00:20:23.619 "cntlid": 45, 00:20:23.619 "qid": 0, 00:20:23.619 "state": "enabled", 00:20:23.619 "listen_address": { 00:20:23.619 "trtype": "TCP", 00:20:23.619 "adrfam": "IPv4", 00:20:23.619 "traddr": "10.0.0.2", 00:20:23.619 "trsvcid": "4420" 00:20:23.619 }, 00:20:23.619 "peer_address": { 00:20:23.619 "trtype": "TCP", 00:20:23.619 "adrfam": "IPv4", 00:20:23.619 "traddr": "10.0.0.1", 00:20:23.619 "trsvcid": "37970" 00:20:23.619 }, 00:20:23.619 "auth": { 00:20:23.619 "state": "completed", 00:20:23.619 "digest": "sha256", 00:20:23.619 "dhgroup": "ffdhe8192" 00:20:23.619 } 00:20:23.619 } 00:20:23.619 ]' 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.619 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.876 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.876 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.876 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.876 03:19:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.134 03:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.067 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.325 03:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:26.259 00:20:26.259 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.259 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.259 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.518 { 00:20:26.518 "cntlid": 47, 00:20:26.518 "qid": 0, 00:20:26.518 "state": "enabled", 00:20:26.518 "listen_address": { 00:20:26.518 "trtype": "TCP", 00:20:26.518 "adrfam": "IPv4", 00:20:26.518 "traddr": "10.0.0.2", 00:20:26.518 "trsvcid": "4420" 00:20:26.518 }, 00:20:26.518 "peer_address": { 00:20:26.518 "trtype": "TCP", 00:20:26.518 "adrfam": "IPv4", 00:20:26.518 "traddr": "10.0.0.1", 00:20:26.518 "trsvcid": "38002" 00:20:26.518 }, 00:20:26.518 "auth": { 00:20:26.518 "state": "completed", 00:20:26.518 "digest": "sha256", 00:20:26.518 "dhgroup": "ffdhe8192" 00:20:26.518 } 00:20:26.518 } 00:20:26.518 ]' 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.518 03:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.518 03:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.518 03:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.518 03:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.776 03:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.710 
03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.710 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.968 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.534 00:20:28.534 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.534 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.534 03:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.534 03:19:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.534 { 00:20:28.534 "cntlid": 49, 00:20:28.534 "qid": 0, 00:20:28.534 "state": "enabled", 00:20:28.534 "listen_address": { 00:20:28.534 "trtype": "TCP", 00:20:28.534 "adrfam": "IPv4", 00:20:28.534 "traddr": "10.0.0.2", 00:20:28.534 "trsvcid": "4420" 00:20:28.534 }, 00:20:28.534 "peer_address": { 00:20:28.534 "trtype": "TCP", 00:20:28.534 "adrfam": "IPv4", 00:20:28.534 "traddr": "10.0.0.1", 00:20:28.534 "trsvcid": "38028" 00:20:28.534 }, 00:20:28.534 "auth": { 00:20:28.534 "state": "completed", 00:20:28.534 "digest": "sha384", 00:20:28.534 "dhgroup": "null" 00:20:28.534 } 00:20:28.534 } 00:20:28.534 ]' 00:20:28.534 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.792 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.050 03:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.019 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.277 03:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.534 00:20:30.534 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.534 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.534 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.791 { 00:20:30.791 "cntlid": 51, 00:20:30.791 "qid": 0, 00:20:30.791 "state": "enabled", 00:20:30.791 "listen_address": { 00:20:30.791 "trtype": "TCP", 00:20:30.791 "adrfam": "IPv4", 00:20:30.791 "traddr": "10.0.0.2", 00:20:30.791 "trsvcid": "4420" 00:20:30.791 }, 00:20:30.791 "peer_address": { 00:20:30.791 "trtype": "TCP", 00:20:30.791 "adrfam": "IPv4", 00:20:30.791 "traddr": "10.0.0.1", 00:20:30.791 "trsvcid": "48484" 00:20:30.791 }, 00:20:30.791 "auth": { 00:20:30.791 "state": "completed", 00:20:30.791 "digest": "sha384", 00:20:30.791 "dhgroup": "null" 00:20:30.791 } 00:20:30.791 } 00:20:30.791 ]' 00:20:30.791 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
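Note: after each attach, the test reads the target-side qpair and checks the negotiated auth parameters with jq, as in the entries above. A minimal sketch of that verification step, using the values logged for this sha384/null pass and assuming the target listens on its default RPC socket:

    # verify the negotiated parameters on the target-side qpair
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]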
00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.048 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.305 03:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.237 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:32.494 03:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.752 00:20:32.752 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.752 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.752 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.009 { 00:20:33.009 "cntlid": 53, 00:20:33.009 "qid": 0, 00:20:33.009 "state": "enabled", 00:20:33.009 "listen_address": { 00:20:33.009 "trtype": "TCP", 00:20:33.009 "adrfam": "IPv4", 00:20:33.009 "traddr": "10.0.0.2", 00:20:33.009 "trsvcid": "4420" 00:20:33.009 }, 00:20:33.009 "peer_address": { 00:20:33.009 "trtype": "TCP", 00:20:33.009 "adrfam": "IPv4", 00:20:33.009 "traddr": "10.0.0.1", 00:20:33.009 "trsvcid": "48510" 00:20:33.009 }, 00:20:33.009 "auth": { 00:20:33.009 "state": "completed", 00:20:33.009 "digest": "sha384", 00:20:33.009 "dhgroup": "null" 00:20:33.009 } 00:20:33.009 } 00:20:33.009 ]' 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.009 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.267 03:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.199 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.199 03:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.457 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.021 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.021 03:20:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.279 { 00:20:35.279 "cntlid": 55, 00:20:35.279 "qid": 0, 00:20:35.279 "state": "enabled", 00:20:35.279 "listen_address": { 00:20:35.279 "trtype": "TCP", 00:20:35.279 "adrfam": "IPv4", 00:20:35.279 "traddr": "10.0.0.2", 00:20:35.279 "trsvcid": "4420" 00:20:35.279 }, 00:20:35.279 "peer_address": { 00:20:35.279 "trtype": "TCP", 00:20:35.279 "adrfam": "IPv4", 00:20:35.279 "traddr": "10.0.0.1", 00:20:35.279 "trsvcid": "48538" 00:20:35.279 }, 00:20:35.279 "auth": { 00:20:35.279 "state": "completed", 00:20:35.279 "digest": "sha384", 00:20:35.279 "dhgroup": "null" 00:20:35.279 } 00:20:35.279 } 00:20:35.279 ]' 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.279 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.537 03:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.470 03:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:36.728 
03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.728 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.985 00:20:36.985 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.985 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.985 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.242 { 00:20:37.242 "cntlid": 57, 00:20:37.242 "qid": 0, 00:20:37.242 "state": "enabled", 00:20:37.242 "listen_address": { 00:20:37.242 "trtype": "TCP", 00:20:37.242 "adrfam": "IPv4", 00:20:37.242 "traddr": "10.0.0.2", 00:20:37.242 "trsvcid": "4420" 00:20:37.242 }, 00:20:37.242 "peer_address": { 00:20:37.242 "trtype": "TCP", 00:20:37.242 "adrfam": "IPv4", 00:20:37.242 "traddr": "10.0.0.1", 00:20:37.242 "trsvcid": "48558" 00:20:37.242 }, 00:20:37.242 "auth": { 00:20:37.242 "state": "completed", 00:20:37.242 "digest": "sha384", 00:20:37.242 "dhgroup": "ffdhe2048" 00:20:37.242 } 00:20:37.242 } 00:20:37.242 ]' 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.242 03:20:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.242 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.500 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.500 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.500 03:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.500 03:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:38.433 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.691 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.948 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:38.948 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.949 03:20:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.949 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.207 00:20:39.207 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.207 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.207 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.465 { 00:20:39.465 "cntlid": 59, 00:20:39.465 "qid": 0, 00:20:39.465 "state": "enabled", 00:20:39.465 "listen_address": { 00:20:39.465 "trtype": "TCP", 00:20:39.465 "adrfam": "IPv4", 00:20:39.465 "traddr": "10.0.0.2", 00:20:39.465 "trsvcid": "4420" 00:20:39.465 }, 00:20:39.465 "peer_address": { 00:20:39.465 "trtype": "TCP", 00:20:39.465 "adrfam": "IPv4", 00:20:39.465 "traddr": "10.0.0.1", 00:20:39.465 "trsvcid": "48576" 00:20:39.465 }, 00:20:39.465 "auth": { 00:20:39.465 "state": "completed", 00:20:39.465 "digest": "sha384", 00:20:39.465 "dhgroup": "ffdhe2048" 00:20:39.465 } 00:20:39.465 } 00:20:39.465 ]' 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.465 03:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.723 03:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.657 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.915 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.916 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.916 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.916 03:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.916 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.916 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.481 00:20:41.481 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.481 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.481 03:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.739 { 00:20:41.739 "cntlid": 61, 00:20:41.739 "qid": 0, 00:20:41.739 "state": "enabled", 00:20:41.739 "listen_address": { 00:20:41.739 "trtype": "TCP", 00:20:41.739 "adrfam": "IPv4", 00:20:41.739 "traddr": "10.0.0.2", 00:20:41.739 "trsvcid": "4420" 00:20:41.739 }, 00:20:41.739 "peer_address": { 00:20:41.739 "trtype": "TCP", 00:20:41.739 "adrfam": "IPv4", 00:20:41.739 "traddr": "10.0.0.1", 00:20:41.739 "trsvcid": "57840" 00:20:41.739 }, 00:20:41.739 "auth": { 00:20:41.739 "state": "completed", 00:20:41.739 "digest": "sha384", 00:20:41.739 "dhgroup": "ffdhe2048" 00:20:41.739 } 00:20:41.739 } 00:20:41.739 ]' 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.739 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.998 03:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:42.931 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.189 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.447 00:20:43.447 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.447 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.447 03:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.706 { 00:20:43.706 "cntlid": 63, 00:20:43.706 "qid": 0, 00:20:43.706 "state": "enabled", 00:20:43.706 "listen_address": { 00:20:43.706 "trtype": "TCP", 00:20:43.706 "adrfam": "IPv4", 00:20:43.706 "traddr": "10.0.0.2", 00:20:43.706 "trsvcid": "4420" 00:20:43.706 }, 00:20:43.706 "peer_address": { 00:20:43.706 "trtype": "TCP", 00:20:43.706 "adrfam": "IPv4", 00:20:43.706 "traddr": "10.0.0.1", 00:20:43.706 "trsvcid": "57868" 00:20:43.706 }, 00:20:43.706 "auth": { 00:20:43.706 "state": "completed", 00:20:43.706 "digest": 
"sha384", 00:20:43.706 "dhgroup": "ffdhe2048" 00:20:43.706 } 00:20:43.706 } 00:20:43.706 ]' 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.706 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.964 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.231 03:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.217 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.475 03:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.475 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.475 03:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.732 00:20:45.732 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.732 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.732 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.990 { 00:20:45.990 "cntlid": 65, 00:20:45.990 "qid": 0, 00:20:45.990 "state": "enabled", 00:20:45.990 "listen_address": { 00:20:45.990 "trtype": "TCP", 00:20:45.990 "adrfam": "IPv4", 00:20:45.990 "traddr": "10.0.0.2", 00:20:45.990 "trsvcid": "4420" 00:20:45.990 }, 00:20:45.990 "peer_address": { 00:20:45.990 "trtype": "TCP", 00:20:45.990 "adrfam": "IPv4", 00:20:45.990 "traddr": "10.0.0.1", 00:20:45.990 "trsvcid": "57912" 00:20:45.990 }, 00:20:45.990 "auth": { 00:20:45.990 "state": "completed", 00:20:45.990 "digest": "sha384", 00:20:45.990 "dhgroup": "ffdhe3072" 00:20:45.990 } 00:20:45.990 } 00:20:45.990 ]' 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.990 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.991 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.991 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.991 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.991 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.991 03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.554 
03:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.487 03:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.744 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.001 00:20:48.001 03:20:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.001 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.001 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.259 { 00:20:48.259 "cntlid": 67, 00:20:48.259 "qid": 0, 00:20:48.259 "state": "enabled", 00:20:48.259 "listen_address": { 00:20:48.259 "trtype": "TCP", 00:20:48.259 "adrfam": "IPv4", 00:20:48.259 "traddr": "10.0.0.2", 00:20:48.259 "trsvcid": "4420" 00:20:48.259 }, 00:20:48.259 "peer_address": { 00:20:48.259 "trtype": "TCP", 00:20:48.259 "adrfam": "IPv4", 00:20:48.259 "traddr": "10.0.0.1", 00:20:48.259 "trsvcid": "57956" 00:20:48.259 }, 00:20:48.259 "auth": { 00:20:48.259 "state": "completed", 00:20:48.259 "digest": "sha384", 00:20:48.259 "dhgroup": "ffdhe3072" 00:20:48.259 } 00:20:48.259 } 00:20:48.259 ]' 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.259 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.516 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.516 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.516 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.516 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.516 03:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.773 03:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.705 
03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.705 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.963 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.220 00:20:50.220 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.220 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.220 03:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.477 { 00:20:50.477 "cntlid": 69, 00:20:50.477 "qid": 0, 00:20:50.477 "state": "enabled", 00:20:50.477 "listen_address": { 
00:20:50.477 "trtype": "TCP", 00:20:50.477 "adrfam": "IPv4", 00:20:50.477 "traddr": "10.0.0.2", 00:20:50.477 "trsvcid": "4420" 00:20:50.477 }, 00:20:50.477 "peer_address": { 00:20:50.477 "trtype": "TCP", 00:20:50.477 "adrfam": "IPv4", 00:20:50.477 "traddr": "10.0.0.1", 00:20:50.477 "trsvcid": "56352" 00:20:50.477 }, 00:20:50.477 "auth": { 00:20:50.477 "state": "completed", 00:20:50.477 "digest": "sha384", 00:20:50.477 "dhgroup": "ffdhe3072" 00:20:50.477 } 00:20:50.477 } 00:20:50.477 ]' 00:20:50.477 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.735 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.993 03:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.924 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.181 
03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.181 03:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.438 00:20:52.438 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.438 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.438 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.696 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.696 { 00:20:52.696 "cntlid": 71, 00:20:52.696 "qid": 0, 00:20:52.696 "state": "enabled", 00:20:52.696 "listen_address": { 00:20:52.696 "trtype": "TCP", 00:20:52.696 "adrfam": "IPv4", 00:20:52.696 "traddr": "10.0.0.2", 00:20:52.696 "trsvcid": "4420" 00:20:52.696 }, 00:20:52.696 "peer_address": { 00:20:52.696 "trtype": "TCP", 00:20:52.696 "adrfam": "IPv4", 00:20:52.696 "traddr": "10.0.0.1", 00:20:52.696 "trsvcid": "56364" 00:20:52.696 }, 00:20:52.696 "auth": { 00:20:52.696 "state": "completed", 00:20:52.697 "digest": "sha384", 00:20:52.697 "dhgroup": "ffdhe3072" 00:20:52.697 } 00:20:52.697 } 00:20:52.697 ]' 00:20:52.697 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.954 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.210 03:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:20:54.142 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.142 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.142 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.143 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.707 03:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.965 00:20:54.965 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.965 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.965 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.221 { 00:20:55.221 "cntlid": 73, 00:20:55.221 "qid": 0, 00:20:55.221 "state": "enabled", 00:20:55.221 "listen_address": { 00:20:55.221 "trtype": "TCP", 00:20:55.221 "adrfam": "IPv4", 00:20:55.221 "traddr": "10.0.0.2", 00:20:55.221 "trsvcid": "4420" 00:20:55.221 }, 00:20:55.221 "peer_address": { 00:20:55.221 "trtype": "TCP", 00:20:55.221 "adrfam": "IPv4", 00:20:55.221 "traddr": "10.0.0.1", 00:20:55.221 "trsvcid": "56380" 00:20:55.221 }, 00:20:55.221 "auth": { 00:20:55.221 "state": "completed", 00:20:55.221 "digest": "sha384", 00:20:55.221 "dhgroup": "ffdhe4096" 00:20:55.221 } 00:20:55.221 } 00:20:55.221 ]' 00:20:55.221 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.222 03:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.479 03:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.855 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.856 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.113 00:20:57.113 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.113 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.113 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.370 { 00:20:57.370 "cntlid": 75, 00:20:57.370 "qid": 0, 00:20:57.370 "state": "enabled", 00:20:57.370 "listen_address": { 00:20:57.370 "trtype": "TCP", 00:20:57.370 "adrfam": "IPv4", 00:20:57.370 "traddr": "10.0.0.2", 00:20:57.370 "trsvcid": "4420" 00:20:57.370 }, 00:20:57.370 "peer_address": { 00:20:57.370 "trtype": "TCP", 00:20:57.370 "adrfam": "IPv4", 00:20:57.370 "traddr": "10.0.0.1", 00:20:57.370 "trsvcid": "56406" 00:20:57.370 }, 00:20:57.370 "auth": { 00:20:57.370 "state": "completed", 00:20:57.370 "digest": "sha384", 00:20:57.370 "dhgroup": "ffdhe4096" 00:20:57.370 } 00:20:57.370 } 00:20:57.370 ]' 00:20:57.370 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.628 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.628 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.628 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.628 03:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.628 03:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.628 03:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.628 03:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.886 03:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:20:58.819 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.820 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.077 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:59.077 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.078 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.336 00:20:59.336 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.336 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.336 03:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.614 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.614 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.614 03:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.614 03:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.888 03:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.888 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.888 { 00:20:59.888 "cntlid": 77, 00:20:59.888 "qid": 0, 00:20:59.888 "state": "enabled", 00:20:59.888 "listen_address": { 00:20:59.888 "trtype": "TCP", 00:20:59.888 "adrfam": "IPv4", 00:20:59.888 "traddr": "10.0.0.2", 00:20:59.888 "trsvcid": "4420" 00:20:59.888 }, 00:20:59.888 "peer_address": { 00:20:59.888 "trtype": "TCP", 00:20:59.888 "adrfam": "IPv4", 00:20:59.888 "traddr": "10.0.0.1", 00:20:59.888 "trsvcid": "56440" 00:20:59.888 }, 00:20:59.888 "auth": { 00:20:59.888 "state": "completed", 00:20:59.888 "digest": "sha384", 00:20:59.888 "dhgroup": "ffdhe4096" 00:20:59.888 } 00:20:59.888 } 00:20:59.888 ]' 00:20:59.888 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.888 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.888 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:59.889 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.889 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.889 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.889 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.889 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.147 03:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.080 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.337 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.338 03:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.595 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.853 03:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.111 { 00:21:02.111 "cntlid": 79, 00:21:02.111 "qid": 0, 00:21:02.111 "state": "enabled", 00:21:02.111 "listen_address": { 00:21:02.111 "trtype": "TCP", 00:21:02.111 "adrfam": "IPv4", 00:21:02.111 "traddr": "10.0.0.2", 00:21:02.111 "trsvcid": "4420" 00:21:02.111 }, 00:21:02.111 "peer_address": { 00:21:02.111 "trtype": "TCP", 00:21:02.111 "adrfam": "IPv4", 00:21:02.111 "traddr": "10.0.0.1", 00:21:02.111 "trsvcid": "39052" 00:21:02.111 }, 00:21:02.111 "auth": { 00:21:02.111 "state": "completed", 00:21:02.111 "digest": "sha384", 00:21:02.111 "dhgroup": "ffdhe4096" 00:21:02.111 } 00:21:02.111 } 00:21:02.111 ]' 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.111 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.370 03:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.303 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.303 03:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.561 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.127 00:21:04.127 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.127 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.127 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.385 { 00:21:04.385 "cntlid": 81, 00:21:04.385 "qid": 0, 00:21:04.385 "state": "enabled", 00:21:04.385 "listen_address": { 00:21:04.385 "trtype": "TCP", 00:21:04.385 "adrfam": "IPv4", 00:21:04.385 "traddr": "10.0.0.2", 00:21:04.385 "trsvcid": "4420" 00:21:04.385 }, 00:21:04.385 "peer_address": { 00:21:04.385 "trtype": "TCP", 00:21:04.385 "adrfam": "IPv4", 00:21:04.385 "traddr": "10.0.0.1", 00:21:04.385 "trsvcid": "39084" 00:21:04.385 }, 00:21:04.385 "auth": { 00:21:04.385 "state": "completed", 00:21:04.385 "digest": "sha384", 00:21:04.385 "dhgroup": "ffdhe6144" 00:21:04.385 } 00:21:04.385 } 00:21:04.385 ]' 00:21:04.385 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.642 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.642 03:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.642 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.642 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.642 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.642 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.642 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.899 03:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.833 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.091 03:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.657 00:21:06.657 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.657 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.657 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.915 { 00:21:06.915 "cntlid": 83, 00:21:06.915 "qid": 0, 00:21:06.915 "state": "enabled", 00:21:06.915 "listen_address": { 00:21:06.915 "trtype": "TCP", 00:21:06.915 "adrfam": "IPv4", 00:21:06.915 "traddr": "10.0.0.2", 00:21:06.915 "trsvcid": "4420" 00:21:06.915 }, 00:21:06.915 "peer_address": { 00:21:06.915 "trtype": "TCP", 00:21:06.915 "adrfam": "IPv4", 00:21:06.915 "traddr": "10.0.0.1", 00:21:06.915 "trsvcid": "39104" 00:21:06.915 }, 00:21:06.915 "auth": { 00:21:06.915 "state": "completed", 00:21:06.915 "digest": "sha384", 00:21:06.915 
"dhgroup": "ffdhe6144" 00:21:06.915 } 00:21:06.915 } 00:21:06.915 ]' 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.915 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.173 03:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:21:08.107 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.365 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.623 03:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.188 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.188 03:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.447 { 00:21:09.447 "cntlid": 85, 00:21:09.447 "qid": 0, 00:21:09.447 "state": "enabled", 00:21:09.447 "listen_address": { 00:21:09.447 "trtype": "TCP", 00:21:09.447 "adrfam": "IPv4", 00:21:09.447 "traddr": "10.0.0.2", 00:21:09.447 "trsvcid": "4420" 00:21:09.447 }, 00:21:09.447 "peer_address": { 00:21:09.447 "trtype": "TCP", 00:21:09.447 "adrfam": "IPv4", 00:21:09.447 "traddr": "10.0.0.1", 00:21:09.447 "trsvcid": "39122" 00:21:09.447 }, 00:21:09.447 "auth": { 00:21:09.447 "state": "completed", 00:21:09.447 "digest": "sha384", 00:21:09.447 "dhgroup": "ffdhe6144" 00:21:09.447 } 00:21:09.447 } 00:21:09.447 ]' 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.447 03:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.705 03:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:10.638 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.638 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.639 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.897 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.463 00:21:11.463 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.463 03:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.463 03:20:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.721 { 00:21:11.721 "cntlid": 87, 00:21:11.721 "qid": 0, 00:21:11.721 "state": "enabled", 00:21:11.721 "listen_address": { 00:21:11.721 "trtype": "TCP", 00:21:11.721 "adrfam": "IPv4", 00:21:11.721 "traddr": "10.0.0.2", 00:21:11.721 "trsvcid": "4420" 00:21:11.721 }, 00:21:11.721 "peer_address": { 00:21:11.721 "trtype": "TCP", 00:21:11.721 "adrfam": "IPv4", 00:21:11.721 "traddr": "10.0.0.1", 00:21:11.721 "trsvcid": "51028" 00:21:11.721 }, 00:21:11.721 "auth": { 00:21:11.721 "state": "completed", 00:21:11.721 "digest": "sha384", 00:21:11.721 "dhgroup": "ffdhe6144" 00:21:11.721 } 00:21:11.721 } 00:21:11.721 ]' 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.721 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.979 03:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.353 03:20:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.353 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.354 03:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.288 00:21:14.288 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.288 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.288 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.546 { 00:21:14.546 "cntlid": 89, 00:21:14.546 "qid": 0, 00:21:14.546 "state": "enabled", 00:21:14.546 "listen_address": { 00:21:14.546 "trtype": "TCP", 00:21:14.546 "adrfam": "IPv4", 00:21:14.546 "traddr": "10.0.0.2", 00:21:14.546 
"trsvcid": "4420" 00:21:14.546 }, 00:21:14.546 "peer_address": { 00:21:14.546 "trtype": "TCP", 00:21:14.546 "adrfam": "IPv4", 00:21:14.546 "traddr": "10.0.0.1", 00:21:14.546 "trsvcid": "51056" 00:21:14.546 }, 00:21:14.546 "auth": { 00:21:14.546 "state": "completed", 00:21:14.546 "digest": "sha384", 00:21:14.546 "dhgroup": "ffdhe8192" 00:21:14.546 } 00:21:14.546 } 00:21:14.546 ]' 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.546 03:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.546 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.546 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.546 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.546 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.546 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.806 03:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.770 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.028 03:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.960 00:21:16.960 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.960 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.960 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.217 { 00:21:17.217 "cntlid": 91, 00:21:17.217 "qid": 0, 00:21:17.217 "state": "enabled", 00:21:17.217 "listen_address": { 00:21:17.217 "trtype": "TCP", 00:21:17.217 "adrfam": "IPv4", 00:21:17.217 "traddr": "10.0.0.2", 00:21:17.217 "trsvcid": "4420" 00:21:17.217 }, 00:21:17.217 "peer_address": { 00:21:17.217 "trtype": "TCP", 00:21:17.217 "adrfam": "IPv4", 00:21:17.217 "traddr": "10.0.0.1", 00:21:17.217 "trsvcid": "51100" 00:21:17.217 }, 00:21:17.217 "auth": { 00:21:17.217 "state": "completed", 00:21:17.217 "digest": "sha384", 00:21:17.217 "dhgroup": "ffdhe8192" 00:21:17.217 } 00:21:17.217 } 00:21:17.217 ]' 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.217 03:20:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.217 03:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.474 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.407 03:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.973 03:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.906 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.906 { 00:21:19.906 "cntlid": 93, 00:21:19.906 "qid": 0, 00:21:19.906 "state": "enabled", 00:21:19.906 "listen_address": { 00:21:19.906 "trtype": "TCP", 00:21:19.906 "adrfam": "IPv4", 00:21:19.906 "traddr": "10.0.0.2", 00:21:19.906 "trsvcid": "4420" 00:21:19.906 }, 00:21:19.906 "peer_address": { 00:21:19.906 "trtype": "TCP", 00:21:19.906 "adrfam": "IPv4", 00:21:19.906 "traddr": "10.0.0.1", 00:21:19.906 "trsvcid": "51130" 00:21:19.906 }, 00:21:19.906 "auth": { 00:21:19.906 "state": "completed", 00:21:19.906 "digest": "sha384", 00:21:19.906 "dhgroup": "ffdhe8192" 00:21:19.906 } 00:21:19.906 } 00:21:19.906 ]' 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.906 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.164 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.164 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.164 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.164 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.164 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.421 03:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.353 03:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.611 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.542 00:21:22.542 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.542 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.542 03:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.800 03:20:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.800 { 00:21:22.800 "cntlid": 95, 00:21:22.800 "qid": 0, 00:21:22.800 "state": "enabled", 00:21:22.800 "listen_address": { 00:21:22.800 "trtype": "TCP", 00:21:22.800 "adrfam": "IPv4", 00:21:22.800 "traddr": "10.0.0.2", 00:21:22.800 "trsvcid": "4420" 00:21:22.800 }, 00:21:22.800 "peer_address": { 00:21:22.800 "trtype": "TCP", 00:21:22.800 "adrfam": "IPv4", 00:21:22.800 "traddr": "10.0.0.1", 00:21:22.800 "trsvcid": "37324" 00:21:22.800 }, 00:21:22.800 "auth": { 00:21:22.800 "state": "completed", 00:21:22.800 "digest": "sha384", 00:21:22.800 "dhgroup": "ffdhe8192" 00:21:22.800 } 00:21:22.800 } 00:21:22.800 ]' 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.800 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.801 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.801 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.059 03:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.006 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.264 03:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.522 00:21:24.522 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.522 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.522 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.780 { 00:21:24.780 "cntlid": 97, 00:21:24.780 "qid": 0, 00:21:24.780 "state": "enabled", 00:21:24.780 "listen_address": { 00:21:24.780 "trtype": "TCP", 00:21:24.780 "adrfam": "IPv4", 00:21:24.780 "traddr": "10.0.0.2", 00:21:24.780 "trsvcid": "4420" 00:21:24.780 }, 00:21:24.780 "peer_address": { 00:21:24.780 "trtype": "TCP", 00:21:24.780 "adrfam": "IPv4", 00:21:24.780 "traddr": "10.0.0.1", 00:21:24.780 "trsvcid": "37332" 00:21:24.780 }, 00:21:24.780 "auth": { 00:21:24.780 "state": "completed", 00:21:24.780 "digest": "sha512", 00:21:24.780 "dhgroup": "null" 00:21:24.780 } 00:21:24.780 } 00:21:24.780 ]' 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.780 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.038 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.296 03:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.231 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.490 03:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.749 00:21:26.749 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.749 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.749 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.006 { 00:21:27.006 "cntlid": 99, 00:21:27.006 "qid": 0, 00:21:27.006 "state": "enabled", 00:21:27.006 "listen_address": { 00:21:27.006 "trtype": "TCP", 00:21:27.006 "adrfam": "IPv4", 00:21:27.006 "traddr": "10.0.0.2", 00:21:27.006 "trsvcid": "4420" 00:21:27.006 }, 00:21:27.006 "peer_address": { 00:21:27.006 "trtype": "TCP", 00:21:27.006 "adrfam": "IPv4", 00:21:27.006 "traddr": "10.0.0.1", 00:21:27.006 "trsvcid": "37364" 00:21:27.006 }, 00:21:27.006 "auth": { 00:21:27.006 "state": "completed", 00:21:27.006 "digest": "sha512", 00:21:27.006 "dhgroup": "null" 00:21:27.006 } 00:21:27.006 } 00:21:27.006 ]' 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.006 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.263 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:27.263 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.263 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.263 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.263 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.521 03:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 
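
The trace above is one pass of the loop this test repeats for every digest/dhgroup/key combination: restrict the SPDK host to a single DHCHAP digest and DH group, register the host NQN on the subsystem with the key under test, attach a controller through the SPDK host stack and check the negotiated auth parameters on the resulting qpair, then redo the same handshake from the kernel initiator with nvme-cli before tearing everything down. A condensed sketch of one such iteration follows, using the address, NQNs and key names from this run; hostrpc and rpc_cmd are the test's own wrappers around scripts/rpc.py (hostrpc demonstrably hits the SPDK host app on /var/tmp/host.sock, while rpc_cmd's real definition lives in autotest_common.sh and is assumed here to forward to the target's default RPC socket), and DHCHAP_KEY1/DHCHAP_CKEY1 are stand-in variables rather than the DHHC-1 blobs actually generated by this job.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # SPDK host/initiator app, as in the target/auth.sh@31 lines above
rpc_cmd() { "$rpc" "$@"; }                         # assumption: nvmf target on the default RPC socket

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Limit the SPDK host to one digest/dhgroup pair for this iteration.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Allow the host NQN on the subsystem with the key under test (plus a controller key when one exists).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach from the SPDK host stack, then confirm what was actually negotiated.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'   # expect digest sha512, dhgroup null, state "completed"
hostrpc bdev_nvme_detach_controller nvme0

# Repeat the handshake from the kernel initiator with nvme-cli, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$DHCHAP_KEY1" --dhchap-ctrl-secret "$DHCHAP_CKEY1"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
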
00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.454 03:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.712 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.970 00:21:28.970 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.970 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.970 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.229 { 00:21:29.229 "cntlid": 101, 00:21:29.229 "qid": 0, 00:21:29.229 "state": "enabled", 00:21:29.229 "listen_address": { 00:21:29.229 "trtype": "TCP", 00:21:29.229 "adrfam": "IPv4", 00:21:29.229 "traddr": "10.0.0.2", 00:21:29.229 "trsvcid": "4420" 00:21:29.229 }, 00:21:29.229 "peer_address": { 00:21:29.229 "trtype": "TCP", 00:21:29.229 "adrfam": "IPv4", 00:21:29.229 "traddr": "10.0.0.1", 00:21:29.229 "trsvcid": "37392" 00:21:29.229 }, 00:21:29.229 "auth": { 00:21:29.229 "state": "completed", 00:21:29.229 "digest": "sha512", 00:21:29.229 "dhgroup": "null" 00:21:29.229 } 00:21:29.229 } 00:21:29.229 ]' 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:29.229 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.487 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.487 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.487 03:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.487 03:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.913 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.914 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.172 00:21:31.172 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.172 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.172 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.430 { 00:21:31.430 "cntlid": 103, 00:21:31.430 "qid": 0, 00:21:31.430 "state": "enabled", 00:21:31.430 "listen_address": { 00:21:31.430 "trtype": "TCP", 00:21:31.430 "adrfam": "IPv4", 00:21:31.430 "traddr": "10.0.0.2", 00:21:31.430 "trsvcid": "4420" 00:21:31.430 }, 00:21:31.430 "peer_address": { 00:21:31.430 "trtype": "TCP", 00:21:31.430 "adrfam": "IPv4", 00:21:31.430 "traddr": "10.0.0.1", 00:21:31.430 "trsvcid": "53376" 00:21:31.430 }, 00:21:31.430 "auth": { 00:21:31.430 "state": "completed", 00:21:31.430 "digest": "sha512", 00:21:31.430 "dhgroup": "null" 00:21:31.430 } 00:21:31.430 } 00:21:31.430 ]' 00:21:31.430 03:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.688 03:20:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.688 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.946 03:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.880 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.138 03:20:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.138 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.396 00:21:33.396 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.396 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.396 03:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.655 { 00:21:33.655 "cntlid": 105, 00:21:33.655 "qid": 0, 00:21:33.655 "state": "enabled", 00:21:33.655 "listen_address": { 00:21:33.655 "trtype": "TCP", 00:21:33.655 "adrfam": "IPv4", 00:21:33.655 "traddr": "10.0.0.2", 00:21:33.655 "trsvcid": "4420" 00:21:33.655 }, 00:21:33.655 "peer_address": { 00:21:33.655 "trtype": "TCP", 00:21:33.655 "adrfam": "IPv4", 00:21:33.655 "traddr": "10.0.0.1", 00:21:33.655 "trsvcid": "53408" 00:21:33.655 }, 00:21:33.655 "auth": { 00:21:33.655 "state": "completed", 00:21:33.655 "digest": "sha512", 00:21:33.655 "dhgroup": "ffdhe2048" 00:21:33.655 } 00:21:33.655 } 00:21:33.655 ]' 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.655 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.913 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.913 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.913 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.913 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.913 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.172 03:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.106 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.363 03:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.622 00:21:35.622 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.622 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.622 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.880 { 00:21:35.880 "cntlid": 107, 00:21:35.880 "qid": 0, 00:21:35.880 "state": "enabled", 00:21:35.880 "listen_address": { 00:21:35.880 "trtype": "TCP", 00:21:35.880 "adrfam": "IPv4", 00:21:35.880 "traddr": "10.0.0.2", 00:21:35.880 "trsvcid": "4420" 00:21:35.880 }, 00:21:35.880 "peer_address": { 00:21:35.880 "trtype": "TCP", 00:21:35.880 "adrfam": "IPv4", 00:21:35.880 "traddr": "10.0.0.1", 00:21:35.880 "trsvcid": "53446" 00:21:35.880 }, 00:21:35.880 "auth": { 00:21:35.880 "state": "completed", 00:21:35.880 "digest": "sha512", 00:21:35.880 "dhgroup": "ffdhe2048" 00:21:35.880 } 00:21:35.880 } 00:21:35.880 ]' 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.880 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.138 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.139 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.139 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.139 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.397 03:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.329 03:21:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.329 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.586 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.587 03:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.587 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.587 03:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.844 00:21:37.844 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.844 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.844 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.102 { 00:21:38.102 "cntlid": 109, 00:21:38.102 "qid": 0, 00:21:38.102 "state": "enabled", 00:21:38.102 "listen_address": { 00:21:38.102 "trtype": "TCP", 00:21:38.102 "adrfam": "IPv4", 00:21:38.102 "traddr": "10.0.0.2", 00:21:38.102 "trsvcid": "4420" 00:21:38.102 }, 00:21:38.102 "peer_address": { 00:21:38.102 "trtype": "TCP", 00:21:38.102 
"adrfam": "IPv4", 00:21:38.102 "traddr": "10.0.0.1", 00:21:38.102 "trsvcid": "53464" 00:21:38.102 }, 00:21:38.102 "auth": { 00:21:38.102 "state": "completed", 00:21:38.102 "digest": "sha512", 00:21:38.102 "dhgroup": "ffdhe2048" 00:21:38.102 } 00:21:38.102 } 00:21:38.102 ]' 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.102 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.359 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.359 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.359 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.359 03:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.732 03:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.732 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.990 00:21:39.991 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.991 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.991 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.248 { 00:21:40.248 "cntlid": 111, 00:21:40.248 "qid": 0, 00:21:40.248 "state": "enabled", 00:21:40.248 "listen_address": { 00:21:40.248 "trtype": "TCP", 00:21:40.248 "adrfam": "IPv4", 00:21:40.248 "traddr": "10.0.0.2", 00:21:40.248 "trsvcid": "4420" 00:21:40.248 }, 00:21:40.248 "peer_address": { 00:21:40.248 "trtype": "TCP", 00:21:40.248 "adrfam": "IPv4", 00:21:40.248 "traddr": "10.0.0.1", 00:21:40.248 "trsvcid": "53494" 00:21:40.248 }, 00:21:40.248 "auth": { 00:21:40.248 "state": "completed", 00:21:40.248 "digest": "sha512", 00:21:40.248 "dhgroup": "ffdhe2048" 00:21:40.248 } 00:21:40.248 } 00:21:40.248 ]' 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.248 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.505 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.505 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.505 03:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.763 03:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.695 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.952 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
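Each pass of the loop above exercises one (digest, dhgroup, key) combination through SPDK's own initiator. Stripped of the xtrace noise, a single pass reduces to roughly the following sequence (a minimal sketch assembled from the commands traced above; the NQNs, addresses, sockets and key names are the ones this run uses, and the target-side calls are assumed to go to rpc.py's default socket, which is where the test's rpc_cmd wrapper points in this job):

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# target side: allow this host NQN on the subsystem with the key under test
# (the key3 pass is registered without --dhchap-ctrlr-key, so no controller key is used there)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller over TCP, authenticating with the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... the qpair auth state is verified here (see the jq checks below) ...

# host side: tear the controller down again before the next combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0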
00:21:42.210 00:21:42.210 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.210 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.210 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.468 { 00:21:42.468 "cntlid": 113, 00:21:42.468 "qid": 0, 00:21:42.468 "state": "enabled", 00:21:42.468 "listen_address": { 00:21:42.468 "trtype": "TCP", 00:21:42.468 "adrfam": "IPv4", 00:21:42.468 "traddr": "10.0.0.2", 00:21:42.468 "trsvcid": "4420" 00:21:42.468 }, 00:21:42.468 "peer_address": { 00:21:42.468 "trtype": "TCP", 00:21:42.468 "adrfam": "IPv4", 00:21:42.468 "traddr": "10.0.0.1", 00:21:42.468 "trsvcid": "49688" 00:21:42.468 }, 00:21:42.468 "auth": { 00:21:42.468 "state": "completed", 00:21:42.468 "digest": "sha512", 00:21:42.468 "dhgroup": "ffdhe3072" 00:21:42.468 } 00:21:42.468 } 00:21:42.468 ]' 00:21:42.468 03:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.468 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.468 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.726 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.726 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.726 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.726 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.726 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.983 03:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
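After each attach the test asks the target for the subsystem's active queue pairs and asserts that authentication actually completed with the negotiated parameters. In isolation that verification is roughly the following (a sketch reusing the same RPCs and jq filters as the trace above; the dhgroup literal changes per pass):

# host side: make sure the controller really came up before trusting the qpair state
[[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# target side: dump the qpairs and inspect the "auth" block of the first one
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]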
00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.915 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.173 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.430 00:21:44.430 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.430 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.430 03:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.687 { 00:21:44.687 
"cntlid": 115, 00:21:44.687 "qid": 0, 00:21:44.687 "state": "enabled", 00:21:44.687 "listen_address": { 00:21:44.687 "trtype": "TCP", 00:21:44.687 "adrfam": "IPv4", 00:21:44.687 "traddr": "10.0.0.2", 00:21:44.687 "trsvcid": "4420" 00:21:44.687 }, 00:21:44.687 "peer_address": { 00:21:44.687 "trtype": "TCP", 00:21:44.687 "adrfam": "IPv4", 00:21:44.687 "traddr": "10.0.0.1", 00:21:44.687 "trsvcid": "49718" 00:21:44.687 }, 00:21:44.687 "auth": { 00:21:44.687 "state": "completed", 00:21:44.687 "digest": "sha512", 00:21:44.687 "dhgroup": "ffdhe3072" 00:21:44.687 } 00:21:44.687 } 00:21:44.687 ]' 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.687 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.945 03:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:21:45.878 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.878 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.878 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.878 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.135 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.135 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.135 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.135 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.393 03:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.651 00:21:46.651 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.651 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.651 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.908 { 00:21:46.908 "cntlid": 117, 00:21:46.908 "qid": 0, 00:21:46.908 "state": "enabled", 00:21:46.908 "listen_address": { 00:21:46.908 "trtype": "TCP", 00:21:46.908 "adrfam": "IPv4", 00:21:46.908 "traddr": "10.0.0.2", 00:21:46.908 "trsvcid": "4420" 00:21:46.908 }, 00:21:46.908 "peer_address": { 00:21:46.908 "trtype": "TCP", 00:21:46.908 "adrfam": "IPv4", 00:21:46.908 "traddr": "10.0.0.1", 00:21:46.908 "trsvcid": "49746" 00:21:46.908 }, 00:21:46.908 "auth": { 00:21:46.908 "state": "completed", 00:21:46.908 "digest": "sha512", 00:21:46.908 "dhgroup": "ffdhe3072" 00:21:46.908 } 00:21:46.908 } 00:21:46.908 ]' 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.908 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.167 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.167 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:47.167 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.167 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.167 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.424 03:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.357 03:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.615 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.181 00:21:49.181 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.181 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.181 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.438 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.438 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.439 { 00:21:49.439 "cntlid": 119, 00:21:49.439 "qid": 0, 00:21:49.439 "state": "enabled", 00:21:49.439 "listen_address": { 00:21:49.439 "trtype": "TCP", 00:21:49.439 "adrfam": "IPv4", 00:21:49.439 "traddr": "10.0.0.2", 00:21:49.439 "trsvcid": "4420" 00:21:49.439 }, 00:21:49.439 "peer_address": { 00:21:49.439 "trtype": "TCP", 00:21:49.439 "adrfam": "IPv4", 00:21:49.439 "traddr": "10.0.0.1", 00:21:49.439 "trsvcid": "49772" 00:21:49.439 }, 00:21:49.439 "auth": { 00:21:49.439 "state": "completed", 00:21:49.439 "digest": "sha512", 00:21:49.439 "dhgroup": "ffdhe3072" 00:21:49.439 } 00:21:49.439 } 00:21:49.439 ]' 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.439 03:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.696 03:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.629 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.887 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.452 00:21:51.452 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.452 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.452 03:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.710 03:21:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.710 { 00:21:51.710 "cntlid": 121, 00:21:51.710 "qid": 0, 00:21:51.710 "state": "enabled", 00:21:51.710 "listen_address": { 00:21:51.710 "trtype": "TCP", 00:21:51.710 "adrfam": "IPv4", 00:21:51.710 "traddr": "10.0.0.2", 00:21:51.710 "trsvcid": "4420" 00:21:51.710 }, 00:21:51.710 "peer_address": { 00:21:51.710 "trtype": "TCP", 00:21:51.710 "adrfam": "IPv4", 00:21:51.710 "traddr": "10.0.0.1", 00:21:51.710 "trsvcid": "54312" 00:21:51.710 }, 00:21:51.710 "auth": { 00:21:51.710 "state": "completed", 00:21:51.710 "digest": "sha512", 00:21:51.710 "dhgroup": "ffdhe4096" 00:21:51.710 } 00:21:51.710 } 00:21:51.710 ]' 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.710 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.970 03:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.932 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.498 03:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.757 00:21:53.757 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.757 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.757 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.015 { 00:21:54.015 "cntlid": 123, 00:21:54.015 "qid": 0, 00:21:54.015 "state": "enabled", 00:21:54.015 "listen_address": { 00:21:54.015 "trtype": "TCP", 00:21:54.015 "adrfam": "IPv4", 00:21:54.015 "traddr": "10.0.0.2", 00:21:54.015 "trsvcid": "4420" 00:21:54.015 }, 00:21:54.015 "peer_address": { 00:21:54.015 "trtype": "TCP", 00:21:54.015 "adrfam": "IPv4", 00:21:54.015 "traddr": "10.0.0.1", 00:21:54.015 "trsvcid": "54340" 00:21:54.015 }, 00:21:54.015 "auth": { 00:21:54.015 "state": "completed", 00:21:54.015 "digest": "sha512", 00:21:54.015 "dhgroup": "ffdhe4096" 00:21:54.015 } 00:21:54.015 } 00:21:54.015 ]' 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.015 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.274 03:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.208 03:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 
03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.775 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.033 00:21:56.033 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.033 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.033 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.291 { 00:21:56.291 "cntlid": 125, 00:21:56.291 "qid": 0, 00:21:56.291 "state": "enabled", 00:21:56.291 "listen_address": { 00:21:56.291 "trtype": "TCP", 00:21:56.291 "adrfam": "IPv4", 00:21:56.291 "traddr": "10.0.0.2", 00:21:56.291 "trsvcid": "4420" 00:21:56.291 }, 00:21:56.291 "peer_address": { 00:21:56.291 "trtype": "TCP", 00:21:56.291 "adrfam": "IPv4", 00:21:56.291 "traddr": "10.0.0.1", 00:21:56.291 "trsvcid": "54376" 00:21:56.291 }, 00:21:56.291 "auth": { 00:21:56.291 "state": "completed", 00:21:56.291 "digest": "sha512", 00:21:56.291 "dhgroup": "ffdhe4096" 00:21:56.291 } 00:21:56.291 } 00:21:56.291 ]' 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.291 03:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.857 03:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.789 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.047 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.305 00:21:58.305 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.305 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.305 03:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.563 { 00:21:58.563 "cntlid": 127, 00:21:58.563 "qid": 0, 00:21:58.563 "state": "enabled", 00:21:58.563 "listen_address": { 00:21:58.563 "trtype": "TCP", 00:21:58.563 "adrfam": "IPv4", 00:21:58.563 "traddr": "10.0.0.2", 00:21:58.563 "trsvcid": "4420" 00:21:58.563 }, 00:21:58.563 "peer_address": { 00:21:58.563 "trtype": "TCP", 00:21:58.563 "adrfam": "IPv4", 00:21:58.563 "traddr": "10.0.0.1", 00:21:58.563 "trsvcid": "54402" 00:21:58.563 }, 00:21:58.563 "auth": { 00:21:58.563 "state": "completed", 00:21:58.563 "digest": "sha512", 00:21:58.563 "dhgroup": "ffdhe4096" 00:21:58.563 } 00:21:58.563 } 00:21:58.563 ]' 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.563 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.821 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.821 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.821 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.079 03:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
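Each authentication round in this trace follows the same pattern driven by target/auth.sh: restrict the host-side DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC socket (which triggers the negotiation), and later detach it. A condensed sketch of one such round, using only commands visible in the log (the DHHC-1 secrets and the keyring setup done earlier in the test are omitted, and rpc.py is assumed to be on PATH):

  # host side: allow only the digest/dhgroup pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target side: authorize the host NQN with key0/ckey0
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach, performing DH-HMAC-CHAP against the target
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0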
00:22:00.013 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.270 03:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.834 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.834 03:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.091 { 00:22:01.091 "cntlid": 129, 00:22:01.091 "qid": 0, 00:22:01.091 "state": "enabled", 00:22:01.091 "listen_address": { 00:22:01.091 "trtype": "TCP", 00:22:01.091 "adrfam": "IPv4", 00:22:01.091 "traddr": "10.0.0.2", 00:22:01.091 "trsvcid": "4420" 00:22:01.091 }, 00:22:01.091 "peer_address": { 00:22:01.091 "trtype": "TCP", 00:22:01.091 "adrfam": "IPv4", 00:22:01.091 "traddr": "10.0.0.1", 00:22:01.091 "trsvcid": "46674" 00:22:01.091 }, 00:22:01.091 "auth": { 
00:22:01.091 "state": "completed", 00:22:01.091 "digest": "sha512", 00:22:01.091 "dhgroup": "ffdhe6144" 00:22:01.091 } 00:22:01.091 } 00:22:01.091 ]' 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.091 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.348 03:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.282 03:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.540 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.105 00:22:03.105 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.105 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.105 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.363 { 00:22:03.363 "cntlid": 131, 00:22:03.363 "qid": 0, 00:22:03.363 "state": "enabled", 00:22:03.363 "listen_address": { 00:22:03.363 "trtype": "TCP", 00:22:03.363 "adrfam": "IPv4", 00:22:03.363 "traddr": "10.0.0.2", 00:22:03.363 "trsvcid": "4420" 00:22:03.363 }, 00:22:03.363 "peer_address": { 00:22:03.363 "trtype": "TCP", 00:22:03.363 "adrfam": "IPv4", 00:22:03.363 "traddr": "10.0.0.1", 00:22:03.363 "trsvcid": "46704" 00:22:03.363 }, 00:22:03.363 "auth": { 00:22:03.363 "state": "completed", 00:22:03.363 "digest": "sha512", 00:22:03.363 "dhgroup": "ffdhe6144" 00:22:03.363 } 00:22:03.363 } 00:22:03.363 ]' 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.363 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.621 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.621 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.621 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.621 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.621 03:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.879 03:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.813 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.071 03:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:05.638 00:22:05.638 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.638 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.638 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.896 { 00:22:05.896 "cntlid": 133, 00:22:05.896 "qid": 0, 00:22:05.896 "state": "enabled", 00:22:05.896 "listen_address": { 00:22:05.896 "trtype": "TCP", 00:22:05.896 "adrfam": "IPv4", 00:22:05.896 "traddr": "10.0.0.2", 00:22:05.896 "trsvcid": "4420" 00:22:05.896 }, 00:22:05.896 "peer_address": { 00:22:05.896 "trtype": "TCP", 00:22:05.896 "adrfam": "IPv4", 00:22:05.896 "traddr": "10.0.0.1", 00:22:05.896 "trsvcid": "46742" 00:22:05.896 }, 00:22:05.896 "auth": { 00:22:05.896 "state": "completed", 00:22:05.896 "digest": "sha512", 00:22:05.896 "dhgroup": "ffdhe6144" 00:22:05.896 } 00:22:05.896 } 00:22:05.896 ]' 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.896 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.155 03:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.529 03:21:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.529 03:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.095 00:22:08.095 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.095 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.095 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.354 { 00:22:08.354 "cntlid": 135, 00:22:08.354 "qid": 0, 00:22:08.354 "state": "enabled", 00:22:08.354 "listen_address": { 
00:22:08.354 "trtype": "TCP", 00:22:08.354 "adrfam": "IPv4", 00:22:08.354 "traddr": "10.0.0.2", 00:22:08.354 "trsvcid": "4420" 00:22:08.354 }, 00:22:08.354 "peer_address": { 00:22:08.354 "trtype": "TCP", 00:22:08.354 "adrfam": "IPv4", 00:22:08.354 "traddr": "10.0.0.1", 00:22:08.354 "trsvcid": "46782" 00:22:08.354 }, 00:22:08.354 "auth": { 00:22:08.354 "state": "completed", 00:22:08.354 "digest": "sha512", 00:22:08.354 "dhgroup": "ffdhe6144" 00:22:08.354 } 00:22:08.354 } 00:22:08.354 ]' 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.354 03:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.613 03:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.988 03:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.922 00:22:10.922 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.922 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.922 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.180 { 00:22:11.180 "cntlid": 137, 00:22:11.180 "qid": 0, 00:22:11.180 "state": "enabled", 00:22:11.180 "listen_address": { 00:22:11.180 "trtype": "TCP", 00:22:11.180 "adrfam": "IPv4", 00:22:11.180 "traddr": "10.0.0.2", 00:22:11.180 "trsvcid": "4420" 00:22:11.180 }, 00:22:11.180 "peer_address": { 00:22:11.180 "trtype": "TCP", 00:22:11.180 "adrfam": "IPv4", 00:22:11.180 "traddr": "10.0.0.1", 00:22:11.180 "trsvcid": "50266" 00:22:11.180 }, 00:22:11.180 "auth": { 00:22:11.180 "state": "completed", 00:22:11.180 "digest": "sha512", 00:22:11.180 "dhgroup": "ffdhe8192" 00:22:11.180 } 00:22:11.180 } 00:22:11.180 ]' 00:22:11.180 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.438 03:21:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.438 03:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.696 03:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.630 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.888 03:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.888 03:21:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.823 00:22:13.823 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.823 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.823 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.081 { 00:22:14.081 "cntlid": 139, 00:22:14.081 "qid": 0, 00:22:14.081 "state": "enabled", 00:22:14.081 "listen_address": { 00:22:14.081 "trtype": "TCP", 00:22:14.081 "adrfam": "IPv4", 00:22:14.081 "traddr": "10.0.0.2", 00:22:14.081 "trsvcid": "4420" 00:22:14.081 }, 00:22:14.081 "peer_address": { 00:22:14.081 "trtype": "TCP", 00:22:14.081 "adrfam": "IPv4", 00:22:14.081 "traddr": "10.0.0.1", 00:22:14.081 "trsvcid": "50300" 00:22:14.081 }, 00:22:14.081 "auth": { 00:22:14.081 "state": "completed", 00:22:14.081 "digest": "sha512", 00:22:14.081 "dhgroup": "ffdhe8192" 00:22:14.081 } 00:22:14.081 } 00:22:14.081 ]' 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.081 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.340 03:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDY2MzEwZTlkN2RlYTgzYmJiODdjMmM2ZTViMDJkYTk59Q0e: --dhchap-ctrl-secret DHHC-1:02:ZDg5MTJhNzNmNTRjMGY1Mzk0NjA2MmUwYjczYjBlMDJmYzNjZGVjMzJhYjEyMmQ0KFx3rw==: 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.274 03:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.533 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.466 00:22:16.466 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.466 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.466 03:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.724 { 00:22:16.724 "cntlid": 141, 00:22:16.724 "qid": 0, 00:22:16.724 "state": "enabled", 00:22:16.724 "listen_address": { 00:22:16.724 "trtype": "TCP", 00:22:16.724 "adrfam": "IPv4", 00:22:16.724 "traddr": "10.0.0.2", 00:22:16.724 "trsvcid": "4420" 00:22:16.724 }, 00:22:16.724 "peer_address": { 00:22:16.724 "trtype": "TCP", 00:22:16.724 "adrfam": "IPv4", 00:22:16.724 "traddr": "10.0.0.1", 00:22:16.724 "trsvcid": "50322" 00:22:16.724 }, 00:22:16.724 "auth": { 00:22:16.724 "state": "completed", 00:22:16.724 "digest": "sha512", 00:22:16.724 "dhgroup": "ffdhe8192" 00:22:16.724 } 00:22:16.724 } 00:22:16.724 ]' 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.724 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.982 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.982 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.982 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.982 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.982 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.239 03:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZGNjYWYyNjg5NzMwNTQ1MDlkMmNiMzMyODkzYmZkNDE1MjRlNjc5Y2I4NDhhNGE0i/CmAA==: --dhchap-ctrl-secret DHHC-1:01:Y2YxNGI5NzQ3NTM2NTE3ODMzM2Y3NGI0MGI2N2U0OWEEnEbm: 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.172 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.430 03:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.364 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.364 03:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.622 03:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.622 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.622 { 00:22:19.622 "cntlid": 143, 00:22:19.622 "qid": 0, 00:22:19.622 "state": "enabled", 00:22:19.622 "listen_address": { 00:22:19.622 "trtype": "TCP", 00:22:19.622 "adrfam": "IPv4", 00:22:19.622 "traddr": "10.0.0.2", 00:22:19.622 "trsvcid": "4420" 00:22:19.622 }, 00:22:19.622 "peer_address": { 00:22:19.622 "trtype": "TCP", 00:22:19.622 "adrfam": "IPv4", 00:22:19.622 "traddr": "10.0.0.1", 00:22:19.622 "trsvcid": "50348" 00:22:19.622 }, 00:22:19.622 "auth": { 00:22:19.622 "state": "completed", 00:22:19.622 "digest": "sha512", 00:22:19.622 "dhgroup": "ffdhe8192" 00:22:19.622 } 00:22:19.622 } 00:22:19.622 ]' 00:22:19.622 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.622 03:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.622 03:21:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.622 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.622 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.622 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.622 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.622 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.881 03:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.815 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
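After each attach, the test confirms the negotiated parameters by querying the target for the subsystem's queue pairs and inspecting the reported auth block; in the rounds shown here the expected values are sha512, ffdhe8192, and a completed state. A minimal sketch of that check, assuming the same subsystem NQN and that jq is available:

  # target side: dump qpairs for the subsystem and assert the auth fields
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]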
00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.073 03:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.007 00:22:22.007 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.007 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.007 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.266 { 00:22:22.266 "cntlid": 145, 00:22:22.266 "qid": 0, 00:22:22.266 "state": "enabled", 00:22:22.266 "listen_address": { 00:22:22.266 "trtype": "TCP", 00:22:22.266 "adrfam": "IPv4", 00:22:22.266 "traddr": "10.0.0.2", 00:22:22.266 "trsvcid": "4420" 00:22:22.266 }, 00:22:22.266 "peer_address": { 00:22:22.266 "trtype": "TCP", 00:22:22.266 "adrfam": "IPv4", 00:22:22.266 "traddr": "10.0.0.1", 00:22:22.266 "trsvcid": "51724" 00:22:22.266 }, 00:22:22.266 "auth": { 00:22:22.266 "state": "completed", 00:22:22.266 "digest": "sha512", 00:22:22.266 "dhgroup": "ffdhe8192" 00:22:22.266 } 00:22:22.266 } 00:22:22.266 ]' 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.266 03:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.524 
03:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmIwZTQyZDUzNWNjZTk5Y2E1ZTEyYWQ1ZGQxMjRlMTBhYTk0MjVlNzI2NmNmOTA5Q2r0wg==: --dhchap-ctrl-secret DHHC-1:03:Mzc0Y2MwNWE4NWM5NGEzZWQwNGI1ZDQzOTQzOGMwZGZlZjM5ZGZmOGUzNzM2MGViNWMzMWQ3MTdhZjc1YTM2NfZCCqQ=: 00:22:23.457 03:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.457 03:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.457 03:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.457 03:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.457 03:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.458 03:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:23.458 03:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.458 03:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:23.458 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:24.391 request: 00:22:24.391 { 00:22:24.391 "name": "nvme0", 00:22:24.391 "trtype": "tcp", 00:22:24.391 "traddr": 
"10.0.0.2", 00:22:24.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.391 "adrfam": "ipv4", 00:22:24.391 "trsvcid": "4420", 00:22:24.391 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.391 "dhchap_key": "key2", 00:22:24.391 "method": "bdev_nvme_attach_controller", 00:22:24.391 "req_id": 1 00:22:24.391 } 00:22:24.391 Got JSON-RPC error response 00:22:24.391 response: 00:22:24.391 { 00:22:24.391 "code": -5, 00:22:24.391 "message": "Input/output error" 00:22:24.391 } 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.391 03:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:25.360 request: 00:22:25.360 { 00:22:25.360 "name": "nvme0", 00:22:25.360 "trtype": "tcp", 00:22:25.360 "traddr": "10.0.0.2", 00:22:25.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.360 "adrfam": "ipv4", 00:22:25.360 "trsvcid": "4420", 00:22:25.360 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.360 "dhchap_key": "key1", 00:22:25.360 "dhchap_ctrlr_key": "ckey2", 00:22:25.360 "method": "bdev_nvme_attach_controller", 00:22:25.360 "req_id": 1 00:22:25.360 } 00:22:25.360 Got JSON-RPC error response 00:22:25.360 response: 00:22:25.360 { 00:22:25.360 "code": -5, 00:22:25.360 "message": "Input/output error" 00:22:25.360 } 00:22:25.360 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:25.360 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.360 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.360 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.360 03:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.361 03:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.294 request: 00:22:26.294 { 00:22:26.294 "name": "nvme0", 00:22:26.294 "trtype": "tcp", 00:22:26.294 "traddr": "10.0.0.2", 00:22:26.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.294 "adrfam": "ipv4", 00:22:26.294 "trsvcid": "4420", 00:22:26.294 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.294 "dhchap_key": "key1", 00:22:26.294 "dhchap_ctrlr_key": "ckey1", 00:22:26.294 "method": "bdev_nvme_attach_controller", 00:22:26.294 "req_id": 1 00:22:26.294 } 00:22:26.294 Got JSON-RPC error response 00:22:26.294 response: 00:22:26.294 { 00:22:26.294 "code": -5, 00:22:26.294 "message": "Input/output error" 00:22:26.294 } 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 447685 ']' 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 447685' 00:22:26.294 killing process with pid 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 447685 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:26.294 03:21:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=470093 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 470093 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 470093 ']' 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.294 03:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 470093 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 470093 ']' 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
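What follows is one more successful sha512/ffdhe8192 attach and then the negative cases: the host narrows its allowed DH-HMAC-CHAP digests with bdev_nvme_set_options and attempts bdev_nvme_attach_controller with keys or parameters the target will not accept, which surfaces as the JSON-RPC code -5 (Input/output error) responses dumped below. A condensed host-side sketch of one such negative case (socket path, addresses, NQNs and key names are the ones used in this run; an illustration of the RPC shape, not the literal target/auth.sh flow):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # restrict the host to sha256 while the target side was set up for sha512/ffdhe8192
    $RPC bdev_nvme_set_options --dhchap-digests sha256

    # expected to fail: the mismatch comes back as JSON-RPC error -5 (Input/output error)
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3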
00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.553 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.810 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.810 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:26.810 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:26.810 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.810 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.068 03:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.000 00:22:28.000 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.000 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.000 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.257 { 00:22:28.257 
"cntlid": 1, 00:22:28.257 "qid": 0, 00:22:28.257 "state": "enabled", 00:22:28.257 "listen_address": { 00:22:28.257 "trtype": "TCP", 00:22:28.257 "adrfam": "IPv4", 00:22:28.257 "traddr": "10.0.0.2", 00:22:28.257 "trsvcid": "4420" 00:22:28.257 }, 00:22:28.257 "peer_address": { 00:22:28.257 "trtype": "TCP", 00:22:28.257 "adrfam": "IPv4", 00:22:28.257 "traddr": "10.0.0.1", 00:22:28.257 "trsvcid": "51786" 00:22:28.257 }, 00:22:28.257 "auth": { 00:22:28.257 "state": "completed", 00:22:28.257 "digest": "sha512", 00:22:28.257 "dhgroup": "ffdhe8192" 00:22:28.257 } 00:22:28.257 } 00:22:28.257 ]' 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.257 03:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.822 03:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE1MzQ3ODMyMTdjYjNkM2M2MmQ4NTBjYWNkYTJlYTYwMWQxMjQ4ZTA5YTJkNWI3YzY2MDY3ODI0YWYwMWU2Y1IUmgk=: 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.755 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.013 request: 00:22:30.013 { 00:22:30.013 "name": "nvme0", 00:22:30.013 "trtype": "tcp", 00:22:30.013 "traddr": "10.0.0.2", 00:22:30.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.013 "adrfam": "ipv4", 00:22:30.013 "trsvcid": "4420", 00:22:30.013 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.013 "dhchap_key": "key3", 00:22:30.013 "method": "bdev_nvme_attach_controller", 00:22:30.013 "req_id": 1 00:22:30.013 } 00:22:30.013 Got JSON-RPC error response 00:22:30.013 response: 00:22:30.013 { 00:22:30.013 "code": -5, 00:22:30.013 "message": "Input/output error" 00:22:30.013 } 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:30.013 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.272 03:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.530 request: 00:22:30.530 { 00:22:30.530 "name": "nvme0", 00:22:30.530 "trtype": "tcp", 00:22:30.530 "traddr": "10.0.0.2", 00:22:30.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.530 "adrfam": "ipv4", 00:22:30.530 "trsvcid": "4420", 00:22:30.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.530 "dhchap_key": "key3", 00:22:30.530 "method": "bdev_nvme_attach_controller", 00:22:30.530 "req_id": 1 00:22:30.530 } 00:22:30.530 Got JSON-RPC error response 00:22:30.530 response: 00:22:30.530 { 00:22:30.530 "code": -5, 00:22:30.530 "message": "Input/output error" 00:22:30.530 } 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.530 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.789 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.047 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.305 request: 00:22:31.305 { 00:22:31.305 "name": "nvme0", 00:22:31.305 "trtype": "tcp", 00:22:31.305 "traddr": "10.0.0.2", 00:22:31.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.305 "adrfam": "ipv4", 00:22:31.305 "trsvcid": "4420", 00:22:31.305 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:31.305 "dhchap_key": "key0", 00:22:31.305 "dhchap_ctrlr_key": "key1", 00:22:31.305 "method": "bdev_nvme_attach_controller", 00:22:31.305 "req_id": 1 00:22:31.305 } 00:22:31.305 Got JSON-RPC error response 00:22:31.305 response: 00:22:31.305 { 00:22:31.305 "code": -5, 00:22:31.305 "message": "Input/output error" 00:22:31.305 } 00:22:31.305 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:31.305 03:21:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.305 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.305 03:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.305 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:31.305 03:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:31.563 00:22:31.563 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:31.563 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:31.563 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.820 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.820 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.820 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 447705 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 447705 ']' 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 447705 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.078 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 447705 00:22:32.336 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:32.336 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:32.336 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 447705' 00:22:32.336 killing process with pid 447705 00:22:32.337 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 447705 00:22:32.337 03:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 447705 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:32.594 
03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.594 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.595 rmmod nvme_tcp 00:22:32.595 rmmod nvme_fabrics 00:22:32.595 rmmod nvme_keyring 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 470093 ']' 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 470093 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 470093 ']' 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 470093 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 470093 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 470093' 00:22:32.595 killing process with pid 470093 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 470093 00:22:32.595 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 470093 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.854 03:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.389 03:22:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.389 03:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UNN /tmp/spdk.key-sha256.2On /tmp/spdk.key-sha384.NAG /tmp/spdk.key-sha512.JSR /tmp/spdk.key-sha512.xUH /tmp/spdk.key-sha384.ApS /tmp/spdk.key-sha256.UsF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:35.389 00:22:35.389 real 3m9.335s 00:22:35.389 user 7m20.491s 00:22:35.389 sys 0m24.976s 00:22:35.389 03:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:35.389 03:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.389 ************************************ 00:22:35.389 END TEST nvmf_auth_target 
00:22:35.389 ************************************ 00:22:35.389 03:22:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:35.389 03:22:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:35.389 03:22:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:35.389 03:22:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:35.389 03:22:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.389 ************************************ 00:22:35.389 START TEST nvmf_bdevio_no_huge 00:22:35.389 ************************************ 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:35.389 * Looking for test storage... 00:22:35.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.389 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.390 
03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.390 03:22:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:37.289 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:37.290 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:37.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.290 03:22:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:37.290 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:37.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.290 
03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:37.290 00:22:37.290 --- 10.0.0.2 ping statistics --- 00:22:37.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.290 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:37.290 00:22:37.290 --- 10.0.0.1 ping statistics --- 00:22:37.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.290 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=472755 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 472755 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 472755 ']' 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 
-- # local max_retries=100 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.290 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:37.291 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.291 [2024-07-23 03:22:03.651199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:37.291 [2024-07-23 03:22:03.651285] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:37.291 [2024-07-23 03:22:03.727077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.291 [2024-07-23 03:22:03.809637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.291 [2024-07-23 03:22:03.809709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.291 [2024-07-23 03:22:03.809748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.291 [2024-07-23 03:22:03.809761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.291 [2024-07-23 03:22:03.809770] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.291 [2024-07-23 03:22:03.809864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:37.291 [2024-07-23 03:22:03.809939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:37.291 [2024-07-23 03:22:03.809942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.291 [2024-07-23 03:22:03.809890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 [2024-07-23 03:22:03.930298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.549 03:22:03 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 Malloc0 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.549 [2024-07-23 03:22:03.967477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.549 { 00:22:37.549 "params": { 00:22:37.549 "name": "Nvme$subsystem", 00:22:37.549 "trtype": "$TEST_TRANSPORT", 00:22:37.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.549 "adrfam": "ipv4", 00:22:37.549 "trsvcid": "$NVMF_PORT", 00:22:37.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.549 "hdgst": ${hdgst:-false}, 00:22:37.549 "ddgst": ${ddgst:-false} 00:22:37.549 }, 00:22:37.549 "method": "bdev_nvme_attach_controller" 00:22:37.549 } 00:22:37.549 EOF 00:22:37.549 )") 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
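The heredoc traced just above is how gen_nvmf_target_json assembles the bdev_nvme attach configuration that bdevio reads over --json /dev/fd/62; the fully resolved document is printed a few lines further on. A condensed sketch of the same pattern, with field names and defaults copied from this trace (the real helper in nvmf/common.sh also handles multiple subsystems and extra wrapping, which is omitted here, so treat this as illustrative rather than the exact helper):
# One attach-controller fragment; values resolve from the harness environment
# (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420 in this run).
subsystem=1
config="$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)"
# Normalize with jq and hand it to bdevio on an anonymous fd, as the test does
# with "--json /dev/fd/62 --no-huge -s 1024".
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(jq . <<<"$config") --no-huge -s 1024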
00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:37.549 03:22:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:37.549 "params": { 00:22:37.549 "name": "Nvme1", 00:22:37.549 "trtype": "tcp", 00:22:37.549 "traddr": "10.0.0.2", 00:22:37.549 "adrfam": "ipv4", 00:22:37.549 "trsvcid": "4420", 00:22:37.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.549 "hdgst": false, 00:22:37.549 "ddgst": false 00:22:37.549 }, 00:22:37.549 "method": "bdev_nvme_attach_controller" 00:22:37.549 }' 00:22:37.549 [2024-07-23 03:22:04.014487] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:37.549 [2024-07-23 03:22:04.014560] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid472885 ] 00:22:37.549 [2024-07-23 03:22:04.074888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:37.807 [2024-07-23 03:22:04.160725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.807 [2024-07-23 03:22:04.160773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.807 [2024-07-23 03:22:04.160777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.064 I/O targets: 00:22:38.064 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:38.064 00:22:38.064 00:22:38.064 CUnit - A unit testing framework for C - Version 2.1-3 00:22:38.064 http://cunit.sourceforge.net/ 00:22:38.064 00:22:38.064 00:22:38.064 Suite: bdevio tests on: Nvme1n1 00:22:38.064 Test: blockdev write read block ...passed 00:22:38.064 Test: blockdev write zeroes read block ...passed 00:22:38.064 Test: blockdev write zeroes read no split ...passed 00:22:38.064 Test: blockdev write zeroes read split ...passed 00:22:38.064 Test: blockdev write zeroes read split partial ...passed 00:22:38.064 Test: blockdev reset ...[2024-07-23 03:22:04.610253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.064 [2024-07-23 03:22:04.610365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbea00 (9): Bad file descriptor 00:22:38.322 [2024-07-23 03:22:04.713147] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:38.322 passed 00:22:38.322 Test: blockdev write read 8 blocks ...passed 00:22:38.322 Test: blockdev write read size > 128k ...passed 00:22:38.322 Test: blockdev write read invalid size ...passed 00:22:38.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:38.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:38.322 Test: blockdev write read max offset ...passed 00:22:38.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:38.322 Test: blockdev writev readv 8 blocks ...passed 00:22:38.322 Test: blockdev writev readv 30 x 1block ...passed 00:22:38.580 Test: blockdev writev readv block ...passed 00:22:38.580 Test: blockdev writev readv size > 128k ...passed 00:22:38.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:38.580 Test: blockdev comparev and writev ...[2024-07-23 03:22:04.930700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.930736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.930761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.930777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.931157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.931187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.931210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.931226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.931610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.931641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.931663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.932041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:04.932085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.580 [2024-07-23 03:22:04.932101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:38.580 passed 00:22:38.580 Test: blockdev nvme passthru rw ...passed 00:22:38.580 Test: blockdev nvme passthru vendor specific ...[2024-07-23 03:22:05.014978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.580 [2024-07-23 03:22:05.015004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:05.015221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.580 [2024-07-23 03:22:05.015243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:05.015462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.580 [2024-07-23 03:22:05.015484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:38.580 [2024-07-23 03:22:05.015698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.580 [2024-07-23 03:22:05.015722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:38.580 passed 00:22:38.580 Test: blockdev nvme admin passthru ...passed 00:22:38.580 Test: blockdev copy ...passed 00:22:38.580 00:22:38.580 Run Summary: Type Total Ran Passed Failed Inactive 00:22:38.580 suites 1 1 n/a 0 0 00:22:38.580 tests 23 23 23 0 0 00:22:38.580 asserts 152 152 152 0 n/a 00:22:38.580 00:22:38.580 Elapsed time = 1.185 seconds 00:22:38.839 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.839 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.839 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.097 rmmod nvme_tcp 00:22:39.097 rmmod nvme_fabrics 00:22:39.097 rmmod nvme_keyring 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 472755 ']' 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 472755 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 472755 ']' 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 472755 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 472755 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 472755' 00:22:39.097 killing process with pid 472755 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 472755 00:22:39.097 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 472755 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.355 03:22:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.885 03:22:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.885 00:22:41.885 real 0m6.469s 00:22:41.885 user 0m11.091s 00:22:41.885 sys 0m2.497s 00:22:41.885 03:22:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.885 03:22:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.885 ************************************ 00:22:41.885 END TEST nvmf_bdevio_no_huge 00:22:41.885 ************************************ 00:22:41.885 03:22:07 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.885 03:22:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:41.885 03:22:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:41.885 03:22:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.885 ************************************ 00:22:41.885 START TEST nvmf_tls 00:22:41.885 ************************************ 00:22:41.885 03:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.885 * Looking for test storage... 
00:22:41.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.885 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.886 03:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.824 
03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:43.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.824 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:22:43.825 00:22:43.825 --- 10.0.0.2 ping statistics --- 00:22:43.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.825 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:43.825 00:22:43.825 --- 10.0.0.1 ping statistics --- 00:22:43.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.825 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=474957 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 474957 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 474957 ']' 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:43.825 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 [2024-07-23 03:22:10.273148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:43.825 [2024-07-23 03:22:10.273234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.825 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.825 [2024-07-23 03:22:10.338794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.083 [2024-07-23 03:22:10.425666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.083 [2024-07-23 03:22:10.425715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:44.083 [2024-07-23 03:22:10.425738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.083 [2024-07-23 03:22:10.425756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.083 [2024-07-23 03:22:10.425773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.083 [2024-07-23 03:22:10.425829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:44.083 03:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:44.341 true 00:22:44.341 03:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.341 03:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:44.599 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:44.599 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:44.599 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:44.857 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.857 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:45.115 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:45.115 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:45.115 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:45.388 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.388 03:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:45.646 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:45.646 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:45.646 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.646 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:45.904 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:45.904 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:45.904 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:46.162 03:22:12 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.162 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:46.420 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:46.420 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:46.420 03:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:46.678 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.678 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RUANOxMY3K 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Inx2iN57bI 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RUANOxMY3K 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Inx2iN57bI 00:22:46.935 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:47.192 03:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:47.755 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RUANOxMY3K 00:22:47.756 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RUANOxMY3K 00:22:47.756 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.756 [2024-07-23 03:22:14.258733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.756 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.013 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.270 [2024-07-23 03:22:14.792164] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.270 [2024-07-23 03:22:14.792419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.270 03:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:48.529 malloc0 00:22:48.787 03:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.045 03:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RUANOxMY3K 00:22:49.045 [2024-07-23 03:22:15.593358] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.045 03:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RUANOxMY3K 00:22:49.303 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.272 Initializing NVMe Controllers 00:22:59.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.272 Initialization complete. Launching workers. 
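The latency summary printed next comes from spdk_nvme_perf run with -S ssl against the target configured in the trace just above. Pulled together in one place, the RPC sequence this suite used to stand up a TLS-enabled NVMe/TCP target; addresses, NQNs and the /tmp key path are the ones from this run, the spdk_nvme_perf path is shortened, and rpc.py is the SPDK script invoked throughout the trace:
# nvmf_tgt was started earlier inside the target namespace:
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.RUANOxMY3K   # chmod 0600; holds the NVMeTLSkey-1:01:... interchange PSK generated above
$rpc sock_impl_set_options -i ssl --tls-version 13        # require TLS 1.3 on the ssl sock impl
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
# Initiator side: perf runs in the same namespace and connects to 10.0.0.2 with the same PSK.
ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"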
00:22:59.272 ======================================================== 00:22:59.272 Latency(us) 00:22:59.272 Device Information : IOPS MiB/s Average min max 00:22:59.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7737.07 30.22 8274.59 1142.11 10078.18 00:22:59.272 ======================================================== 00:22:59.272 Total : 7737.07 30.22 8274.59 1142.11 10078.18 00:22:59.272 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RUANOxMY3K 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RUANOxMY3K' 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=476848 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 476848 /var/tmp/bdevperf.sock 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 476848 ']' 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.272 03:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 [2024-07-23 03:22:25.769565] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:59.273 [2024-07-23 03:22:25.769654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476848 ] 00:22:59.273 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.273 [2024-07-23 03:22:25.829238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.530 [2024-07-23 03:22:25.916080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.530 03:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.530 03:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.530 03:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RUANOxMY3K 00:22:59.789 [2024-07-23 03:22:26.296175] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.789 [2024-07-23 03:22:26.296290] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.047 TLSTESTn1 00:23:00.047 03:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.047 Running I/O for 10 seconds... 00:23:10.012 00:23:10.012 Latency(us) 00:23:10.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.012 Verification LBA range: start 0x0 length 0x2000 00:23:10.012 TLSTESTn1 : 10.04 1257.43 4.91 0.00 0.00 101501.05 9514.86 100197.26 00:23:10.012 =================================================================================================================== 00:23:10.012 Total : 1257.43 4.91 0.00 0.00 101501.05 9514.86 100197.26 00:23:10.012 0 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 476848 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 476848 ']' 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 476848 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.012 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 476848 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 476848' 00:23:10.270 killing process with pid 476848 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 476848 00:23:10.270 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.270 00:23:10.270 Latency(us) 00:23:10.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.270 
=================================================================================================================== 00:23:10.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.270 [2024-07-23 03:22:36.606296] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 476848 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Inx2iN57bI 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Inx2iN57bI 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Inx2iN57bI 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Inx2iN57bI' 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=478047 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 478047 /var/tmp/bdevperf.sock 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478047 ']' 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.270 03:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.270 [2024-07-23 03:22:36.845542] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:10.270 [2024-07-23 03:22:36.845626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478047 ] 00:23:10.528 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.528 [2024-07-23 03:22:36.905317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.528 [2024-07-23 03:22:36.992874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.528 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.528 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.528 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Inx2iN57bI 00:23:10.786 [2024-07-23 03:22:37.313068] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.786 [2024-07-23 03:22:37.313183] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.786 [2024-07-23 03:22:37.324239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.786 [2024-07-23 03:22:37.325153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x657ed0 (107): Transport endpoint is not connected 00:23:10.786 [2024-07-23 03:22:37.326144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x657ed0 (9): Bad file descriptor 00:23:10.786 [2024-07-23 03:22:37.327144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.786 [2024-07-23 03:22:37.327162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.786 [2024-07-23 03:22:37.327179] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:10.786 request: 00:23:10.786 { 00:23:10.786 "name": "TLSTEST", 00:23:10.786 "trtype": "tcp", 00:23:10.786 "traddr": "10.0.0.2", 00:23:10.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.786 "adrfam": "ipv4", 00:23:10.786 "trsvcid": "4420", 00:23:10.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.786 "psk": "/tmp/tmp.Inx2iN57bI", 00:23:10.786 "method": "bdev_nvme_attach_controller", 00:23:10.786 "req_id": 1 00:23:10.786 } 00:23:10.786 Got JSON-RPC error response 00:23:10.786 response: 00:23:10.786 { 00:23:10.786 "code": -5, 00:23:10.786 "message": "Input/output error" 00:23:10.786 } 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 478047 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478047 ']' 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478047 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.786 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478047 00:23:11.043 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:11.043 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:11.043 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478047' 00:23:11.043 killing process with pid 478047 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478047 00:23:11.044 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.044 00:23:11.044 Latency(us) 00:23:11.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.044 =================================================================================================================== 00:23:11.044 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.044 [2024-07-23 03:22:37.378912] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478047 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RUANOxMY3K 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RUANOxMY3K 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RUANOxMY3K 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RUANOxMY3K' 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=478179 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 478179 /var/tmp/bdevperf.sock 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478179 ']' 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.044 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.301 [2024-07-23 03:22:37.642808] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:11.301 [2024-07-23 03:22:37.642887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478179 ] 00:23:11.301 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.301 [2024-07-23 03:22:37.699863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.301 [2024-07-23 03:22:37.782232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.558 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.558 03:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:11.558 03:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RUANOxMY3K 00:23:11.845 [2024-07-23 03:22:38.162100] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.845 [2024-07-23 03:22:38.162210] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.845 [2024-07-23 03:22:38.167416] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.845 [2024-07-23 03:22:38.167448] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.845 [2024-07-23 03:22:38.167500] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.845 [2024-07-23 03:22:38.168035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce6ed0 (107): Transport endpoint is not connected 00:23:11.845 [2024-07-23 03:22:38.169023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce6ed0 (9): Bad file descriptor 00:23:11.845 [2024-07-23 03:22:38.170023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.845 [2024-07-23 03:22:38.170044] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.845 [2024-07-23 03:22:38.170062] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:11.845 request: 00:23:11.845 { 00:23:11.845 "name": "TLSTEST", 00:23:11.845 "trtype": "tcp", 00:23:11.845 "traddr": "10.0.0.2", 00:23:11.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.845 "adrfam": "ipv4", 00:23:11.845 "trsvcid": "4420", 00:23:11.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.845 "psk": "/tmp/tmp.RUANOxMY3K", 00:23:11.845 "method": "bdev_nvme_attach_controller", 00:23:11.845 "req_id": 1 00:23:11.845 } 00:23:11.845 Got JSON-RPC error response 00:23:11.845 response: 00:23:11.845 { 00:23:11.845 "code": -5, 00:23:11.845 "message": "Input/output error" 00:23:11.845 } 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 478179 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478179 ']' 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478179 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478179 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478179' 00:23:11.845 killing process with pid 478179 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478179 00:23:11.845 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.845 00:23:11.845 Latency(us) 00:23:11.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.845 =================================================================================================================== 00:23:11.845 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.845 [2024-07-23 03:22:38.224019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.845 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478179 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RUANOxMY3K 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RUANOxMY3K 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RUANOxMY3K 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RUANOxMY3K' 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=478320 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.103 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 478320 /var/tmp/bdevperf.sock 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478320 ']' 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.104 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.104 [2024-07-23 03:22:38.479403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:12.104 [2024-07-23 03:22:38.479481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478320 ] 00:23:12.104 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.104 [2024-07-23 03:22:38.537463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.104 [2024-07-23 03:22:38.624122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.361 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.361 03:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.361 03:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RUANOxMY3K 00:23:12.619 [2024-07-23 03:22:38.999791] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.619 [2024-07-23 03:22:38.999911] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:12.619 [2024-07-23 03:22:39.008639] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:12.619 [2024-07-23 03:22:39.008670] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:12.619 [2024-07-23 03:22:39.008735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.619 [2024-07-23 03:22:39.008894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b2ed0 (107): Transport endpoint is not connected 00:23:12.619 [2024-07-23 03:22:39.009884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b2ed0 (9): Bad file descriptor 00:23:12.619 [2024-07-23 03:22:39.010885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:12.619 [2024-07-23 03:22:39.010920] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:12.619 [2024-07-23 03:22:39.010938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:12.619 request: 00:23:12.619 { 00:23:12.619 "name": "TLSTEST", 00:23:12.619 "trtype": "tcp", 00:23:12.619 "traddr": "10.0.0.2", 00:23:12.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.619 "adrfam": "ipv4", 00:23:12.619 "trsvcid": "4420", 00:23:12.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.619 "psk": "/tmp/tmp.RUANOxMY3K", 00:23:12.619 "method": "bdev_nvme_attach_controller", 00:23:12.619 "req_id": 1 00:23:12.619 } 00:23:12.619 Got JSON-RPC error response 00:23:12.619 response: 00:23:12.619 { 00:23:12.619 "code": -5, 00:23:12.619 "message": "Input/output error" 00:23:12.619 } 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 478320 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478320 ']' 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478320 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478320 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478320' 00:23:12.619 killing process with pid 478320 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478320 00:23:12.619 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.619 00:23:12.619 Latency(us) 00:23:12.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.619 =================================================================================================================== 00:23:12.619 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.619 [2024-07-23 03:22:39.059373] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:12.619 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478320 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.877 
03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=478454 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 478454 /var/tmp/bdevperf.sock 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478454 ']' 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.877 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.877 [2024-07-23 03:22:39.312848] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
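Each failing attach in this block (an unknown key, mismatched host/key pairs, and the empty PSK being set up here) is deliberate: target/tls.sh wraps run_bdevperf in the NOT helper, so a case passes only when bdev_nvme_attach_controller returns the JSON-RPC error and the wrapper exits non-zero. A simplified sketch of that pattern; expect_failure is a hypothetical stand-in for the more involved NOT/valid_exec_arg machinery in autotest_common.sh:

# hypothetical stand-in for NOT: succeed only when the wrapped command fails
expect_failure() {
    if "$@"; then
        echo "expected failure, but command succeeded" >&2
        return 1
    fi
}

# attaching with a key the target was never given must fail
expect_failure run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Inx2iN57bI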
00:23:12.877 [2024-07-23 03:22:39.312928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478454 ] 00:23:12.877 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.877 [2024-07-23 03:22:39.369835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.877 [2024-07-23 03:22:39.450334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.135 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.135 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:13.135 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.394 [2024-07-23 03:22:39.780683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:13.394 [2024-07-23 03:22:39.782129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10685c0 (9): Bad file descriptor 00:23:13.394 [2024-07-23 03:22:39.783124] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.394 [2024-07-23 03:22:39.783144] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:13.394 [2024-07-23 03:22:39.783161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:13.394 request: 00:23:13.394 { 00:23:13.394 "name": "TLSTEST", 00:23:13.394 "trtype": "tcp", 00:23:13.394 "traddr": "10.0.0.2", 00:23:13.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.394 "adrfam": "ipv4", 00:23:13.394 "trsvcid": "4420", 00:23:13.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.394 "method": "bdev_nvme_attach_controller", 00:23:13.394 "req_id": 1 00:23:13.394 } 00:23:13.394 Got JSON-RPC error response 00:23:13.394 response: 00:23:13.394 { 00:23:13.394 "code": -5, 00:23:13.394 "message": "Input/output error" 00:23:13.394 } 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 478454 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478454 ']' 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478454 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478454 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478454' 00:23:13.394 killing process with pid 478454 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478454 00:23:13.394 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.394 00:23:13.394 Latency(us) 00:23:13.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.394 =================================================================================================================== 00:23:13.394 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.394 03:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478454 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 474957 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 474957 ']' 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 474957 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 474957 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 474957' 00:23:13.652 killing process with pid 474957 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 474957 00:23:13.652 
[2024-07-23 03:22:40.081981] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:13.652 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 474957 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:13.910 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.okA4zgB3py 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.okA4zgB3py 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=478584 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 478584 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478584 ']' 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.911 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.911 [2024-07-23 03:22:40.415756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
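The key used for the remaining cases comes from format_interchange_psk, which wraps the 48-byte configured key into the NVMe TLS PSK interchange format: the key bytes plus a CRC32 are base64-encoded and prefixed with NVMeTLSkey-1 and a hash identifier (02, i.e. SHA-384, for this long key). A rough sketch of that encoding, mirroring the small python call in nvmf/common.sh and assuming, as that helper does, that the CRC32 is appended little-endian before encoding:

key=00112233445566778899aabbccddeeff0011223344556677   # configured key, as hex text
python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # CRC32 appended before base64
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())' "$key"
# the result is written to a mktemp file and chmod 0600 before it is used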
00:23:13.911 [2024-07-23 03:22:40.415833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.911 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.911 [2024-07-23 03:22:40.478644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.170 [2024-07-23 03:22:40.562098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.170 [2024-07-23 03:22:40.562156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.170 [2024-07-23 03:22:40.562178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.170 [2024-07-23 03:22:40.562204] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.170 [2024-07-23 03:22:40.562218] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.170 [2024-07-23 03:22:40.562252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.okA4zgB3py 00:23:14.170 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.427 [2024-07-23 03:22:40.910095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.427 03:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.685 03:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.943 [2024-07-23 03:22:41.391363] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.943 [2024-07-23 03:22:41.391611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.943 03:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:15.201 malloc0 00:23:15.201 03:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.459 03:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 
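Those setup_nvmf_tgt steps are the target-side half of the test: a TCP transport, a subsystem, a TLS-enabled listener, a malloc namespace, and a host entry bound to the PSK file. The same RPC sequence, condensed, with rpc.py taken from the SPDK tree and the key file from the previous step:

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k makes the listener require TLS
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py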
00:23:15.717 [2024-07-23 03:22:42.137611] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.okA4zgB3py 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.okA4zgB3py' 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=478770 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 478770 /var/tmp/bdevperf.sock 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 478770 ']' 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.717 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.717 [2024-07-23 03:22:42.201661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:15.717 [2024-07-23 03:22:42.201731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478770 ] 00:23:15.717 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.717 [2024-07-23 03:22:42.261234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.976 [2024-07-23 03:22:42.347947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.976 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:15.976 03:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:15.976 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:16.234 [2024-07-23 03:22:42.699058] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.234 [2024-07-23 03:22:42.699175] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:16.234 TLSTESTn1 00:23:16.234 03:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.492 Running I/O for 10 seconds... 00:23:26.460 00:23:26.460 Latency(us) 00:23:26.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.460 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.460 Verification LBA range: start 0x0 length 0x2000 00:23:26.460 TLSTESTn1 : 10.05 2179.83 8.51 0.00 0.00 58556.22 9417.77 93206.76 00:23:26.460 =================================================================================================================== 00:23:26.460 Total : 2179.83 8.51 0.00 0.00 58556.22 9417.77 93206.76 00:23:26.460 0 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 478770 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478770 ']' 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478770 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:26.460 03:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478770 00:23:26.460 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:26.460 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:26.460 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478770' 00:23:26.460 killing process with pid 478770 00:23:26.460 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478770 00:23:26.460 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.460 00:23:26.460 Latency(us) 00:23:26.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.460 
=================================================================================================================== 00:23:26.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.460 [2024-07-23 03:22:53.016857] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:26.460 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478770 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.okA4zgB3py 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.okA4zgB3py 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.okA4zgB3py 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.okA4zgB3py 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.okA4zgB3py' 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=480082 00:23:26.718 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 480082 /var/tmp/bdevperf.sock 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 480082 ']' 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.719 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.719 [2024-07-23 03:22:53.293372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:26.719 [2024-07-23 03:22:53.293452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480082 ] 00:23:26.976 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.976 [2024-07-23 03:22:53.353159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.976 [2024-07-23 03:22:53.441009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.976 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.976 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.976 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:27.541 [2024-07-23 03:22:53.824629] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.541 [2024-07-23 03:22:53.824709] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:27.541 [2024-07-23 03:22:53.824730] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.okA4zgB3py 00:23:27.541 request: 00:23:27.541 { 00:23:27.541 "name": "TLSTEST", 00:23:27.541 "trtype": "tcp", 00:23:27.541 "traddr": "10.0.0.2", 00:23:27.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.541 "adrfam": "ipv4", 00:23:27.541 "trsvcid": "4420", 00:23:27.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.541 "psk": "/tmp/tmp.okA4zgB3py", 00:23:27.541 "method": "bdev_nvme_attach_controller", 00:23:27.541 "req_id": 1 00:23:27.541 } 00:23:27.541 Got JSON-RPC error response 00:23:27.541 response: 00:23:27.541 { 00:23:27.541 "code": -1, 00:23:27.541 "message": "Operation not permitted" 00:23:27.541 } 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 480082 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 480082 ']' 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 480082 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 480082 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 480082' 00:23:27.541 killing process with pid 480082 00:23:27.541 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 480082 00:23:27.541 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.541 00:23:27.541 Latency(us) 00:23:27.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.542 =================================================================================================================== 00:23:27.542 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.542 03:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 480082 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 478584 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 478584 ']' 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 478584 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478584 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478584' 00:23:27.542 killing process with pid 478584 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 478584 00:23:27.542 [2024-07-23 03:22:54.116819] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:27.542 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 478584 00:23:27.802 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:27.802 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.802 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:27.802 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=480224 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 480224 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 480224 ']' 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.061 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.061 [2024-07-23 03:22:54.426579] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
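nvmfappstart above launches the target inside the test's network namespace and blocks until its RPC socket answers. A minimal sketch of the equivalent manual steps, using the same namespace and binary the job uses; the rpc_get_methods poll is a stand-in for what waitforlisten does internally:

# start the target in the test namespace, then wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# poll until the RPC server responds (waitforlisten wraps a loop like this)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done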
00:23:28.061 [2024-07-23 03:22:54.426678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.061 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.061 [2024-07-23 03:22:54.494794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.061 [2024-07-23 03:22:54.583722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.061 [2024-07-23 03:22:54.583788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.061 [2024-07-23 03:22:54.583815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.061 [2024-07-23 03:22:54.583838] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.061 [2024-07-23 03:22:54.583855] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.061 [2024-07-23 03:22:54.583905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.okA4zgB3py 00:23:28.320 03:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.578 [2024-07-23 03:22:55.005669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.578 03:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.836 03:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.093 [2024-07-23 03:22:55.591254] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:29.093 [2024-07-23 03:22:55.591518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.093 03:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.351 malloc0 00:23:29.351 03:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.609 03:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:29.868 [2024-07-23 03:22:56.369587] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:29.868 [2024-07-23 03:22:56.369644] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:29.868 [2024-07-23 03:22:56.369709] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:29.868 request: 00:23:29.868 { 00:23:29.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.868 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.868 "psk": "/tmp/tmp.okA4zgB3py", 00:23:29.868 "method": "nvmf_subsystem_add_host", 00:23:29.868 "req_id": 1 00:23:29.868 } 00:23:29.868 Got JSON-RPC error response 00:23:29.868 response: 00:23:29.868 { 00:23:29.868 "code": -32603, 00:23:29.868 "message": "Internal error" 00:23:29.868 } 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 480224 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 480224 ']' 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 480224 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 480224 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 480224' 00:23:29.868 killing process with pid 480224 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 480224 00:23:29.868 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 480224 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.okA4zgB3py 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=480516 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 480516 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 480516 ']' 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.126 03:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.385 [2024-07-23 03:22:56.734065] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:30.385 [2024-07-23 03:22:56.734150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.385 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.385 [2024-07-23 03:22:56.806206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.385 [2024-07-23 03:22:56.896913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.385 [2024-07-23 03:22:56.896969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.385 [2024-07-23 03:22:56.896992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.385 [2024-07-23 03:22:56.897011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.385 [2024-07-23 03:22:56.897027] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
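For orientation, the target-side sequence exercised above (setup_nvmf_tgt) can be condensed as follows. This is a sketch reconstructed from the rpc.py calls visible in this log, not a verbatim excerpt; the RPC and KEY shell variables are illustrative shorthand, while the paths, NQNs and addresses are the ones used by this run. The earlier attempt at target/tls.sh@177 is a deliberate negative test: it fails with "Incorrect permissions for PSK file" because the key file is not yet mode 0600, which the script fixes with chmod before retrying.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative shorthand
KEY=/tmp/tmp.okA4zgB3py                                                # PSK file used by this run

chmod 0600 "$KEY"                                   # PSK files must not be group/world readable
$RPC nvmf_create_transport -t tcp -o                # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener (experimental)
$RPC bdev_malloc_create 32 4096 -b malloc0          # 32 MiB malloc bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"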
00:23:30.385 [2024-07-23 03:22:56.897063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.okA4zgB3py 00:23:30.643 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:30.901 [2024-07-23 03:22:57.313210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.901 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.158 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.416 [2024-07-23 03:22:57.890736] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.416 [2024-07-23 03:22:57.891010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.416 03:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.674 malloc0 00:23:31.674 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.931 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:32.189 [2024-07-23 03:22:58.696421] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=480803 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 480803 /var/tmp/bdevperf.sock 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 480803 ']' 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.189 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.190 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.190 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.190 03:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.190 [2024-07-23 03:22:58.762465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:32.190 [2024-07-23 03:22:58.762537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480803 ] 00:23:32.447 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.447 [2024-07-23 03:22:58.820369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.447 [2024-07-23 03:22:58.906201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.447 03:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.447 03:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.447 03:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:32.705 [2024-07-23 03:22:59.232127] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.705 [2024-07-23 03:22:59.232246] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:32.963 TLSTESTn1 00:23:32.963 03:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:33.220 03:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:33.220 "subsystems": [ 00:23:33.220 { 00:23:33.220 "subsystem": "keyring", 00:23:33.220 "config": [] 00:23:33.220 }, 00:23:33.220 { 00:23:33.220 "subsystem": "iobuf", 00:23:33.220 "config": [ 00:23:33.220 { 00:23:33.221 "method": "iobuf_set_options", 00:23:33.221 "params": { 00:23:33.221 "small_pool_count": 8192, 00:23:33.221 "large_pool_count": 1024, 00:23:33.221 "small_bufsize": 8192, 00:23:33.221 "large_bufsize": 135168 00:23:33.221 } 00:23:33.221 } 00:23:33.221 ] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "sock", 00:23:33.221 "config": [ 00:23:33.221 { 00:23:33.221 "method": "sock_set_default_impl", 00:23:33.221 "params": { 00:23:33.221 "impl_name": "posix" 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "sock_impl_set_options", 00:23:33.221 "params": { 00:23:33.221 "impl_name": "ssl", 00:23:33.221 "recv_buf_size": 4096, 00:23:33.221 "send_buf_size": 4096, 00:23:33.221 "enable_recv_pipe": true, 00:23:33.221 "enable_quickack": false, 00:23:33.221 "enable_placement_id": 0, 00:23:33.221 "enable_zerocopy_send_server": true, 00:23:33.221 "enable_zerocopy_send_client": false, 00:23:33.221 "zerocopy_threshold": 0, 00:23:33.221 "tls_version": 0, 00:23:33.221 "enable_ktls": false 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "sock_impl_set_options", 00:23:33.221 "params": { 00:23:33.221 "impl_name": "posix", 00:23:33.221 "recv_buf_size": 2097152, 00:23:33.221 "send_buf_size": 2097152, 
00:23:33.221 "enable_recv_pipe": true, 00:23:33.221 "enable_quickack": false, 00:23:33.221 "enable_placement_id": 0, 00:23:33.221 "enable_zerocopy_send_server": true, 00:23:33.221 "enable_zerocopy_send_client": false, 00:23:33.221 "zerocopy_threshold": 0, 00:23:33.221 "tls_version": 0, 00:23:33.221 "enable_ktls": false 00:23:33.221 } 00:23:33.221 } 00:23:33.221 ] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "vmd", 00:23:33.221 "config": [] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "accel", 00:23:33.221 "config": [ 00:23:33.221 { 00:23:33.221 "method": "accel_set_options", 00:23:33.221 "params": { 00:23:33.221 "small_cache_size": 128, 00:23:33.221 "large_cache_size": 16, 00:23:33.221 "task_count": 2048, 00:23:33.221 "sequence_count": 2048, 00:23:33.221 "buf_count": 2048 00:23:33.221 } 00:23:33.221 } 00:23:33.221 ] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "bdev", 00:23:33.221 "config": [ 00:23:33.221 { 00:23:33.221 "method": "bdev_set_options", 00:23:33.221 "params": { 00:23:33.221 "bdev_io_pool_size": 65535, 00:23:33.221 "bdev_io_cache_size": 256, 00:23:33.221 "bdev_auto_examine": true, 00:23:33.221 "iobuf_small_cache_size": 128, 00:23:33.221 "iobuf_large_cache_size": 16 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_raid_set_options", 00:23:33.221 "params": { 00:23:33.221 "process_window_size_kb": 1024 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_iscsi_set_options", 00:23:33.221 "params": { 00:23:33.221 "timeout_sec": 30 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_nvme_set_options", 00:23:33.221 "params": { 00:23:33.221 "action_on_timeout": "none", 00:23:33.221 "timeout_us": 0, 00:23:33.221 "timeout_admin_us": 0, 00:23:33.221 "keep_alive_timeout_ms": 10000, 00:23:33.221 "arbitration_burst": 0, 00:23:33.221 "low_priority_weight": 0, 00:23:33.221 "medium_priority_weight": 0, 00:23:33.221 "high_priority_weight": 0, 00:23:33.221 "nvme_adminq_poll_period_us": 10000, 00:23:33.221 "nvme_ioq_poll_period_us": 0, 00:23:33.221 "io_queue_requests": 0, 00:23:33.221 "delay_cmd_submit": true, 00:23:33.221 "transport_retry_count": 4, 00:23:33.221 "bdev_retry_count": 3, 00:23:33.221 "transport_ack_timeout": 0, 00:23:33.221 "ctrlr_loss_timeout_sec": 0, 00:23:33.221 "reconnect_delay_sec": 0, 00:23:33.221 "fast_io_fail_timeout_sec": 0, 00:23:33.221 "disable_auto_failback": false, 00:23:33.221 "generate_uuids": false, 00:23:33.221 "transport_tos": 0, 00:23:33.221 "nvme_error_stat": false, 00:23:33.221 "rdma_srq_size": 0, 00:23:33.221 "io_path_stat": false, 00:23:33.221 "allow_accel_sequence": false, 00:23:33.221 "rdma_max_cq_size": 0, 00:23:33.221 "rdma_cm_event_timeout_ms": 0, 00:23:33.221 "dhchap_digests": [ 00:23:33.221 "sha256", 00:23:33.221 "sha384", 00:23:33.221 "sha512" 00:23:33.221 ], 00:23:33.221 "dhchap_dhgroups": [ 00:23:33.221 "null", 00:23:33.221 "ffdhe2048", 00:23:33.221 "ffdhe3072", 00:23:33.221 "ffdhe4096", 00:23:33.221 "ffdhe6144", 00:23:33.221 "ffdhe8192" 00:23:33.221 ] 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_nvme_set_hotplug", 00:23:33.221 "params": { 00:23:33.221 "period_us": 100000, 00:23:33.221 "enable": false 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_malloc_create", 00:23:33.221 "params": { 00:23:33.221 "name": "malloc0", 00:23:33.221 "num_blocks": 8192, 00:23:33.221 "block_size": 4096, 00:23:33.221 "physical_block_size": 4096, 00:23:33.221 "uuid": "59e8b0ee-f744-47e4-8916-f3fb7ec072b1", 
00:23:33.221 "optimal_io_boundary": 0 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "bdev_wait_for_examine" 00:23:33.221 } 00:23:33.221 ] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "nbd", 00:23:33.221 "config": [] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "scheduler", 00:23:33.221 "config": [ 00:23:33.221 { 00:23:33.221 "method": "framework_set_scheduler", 00:23:33.221 "params": { 00:23:33.221 "name": "static" 00:23:33.221 } 00:23:33.221 } 00:23:33.221 ] 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "subsystem": "nvmf", 00:23:33.221 "config": [ 00:23:33.221 { 00:23:33.221 "method": "nvmf_set_config", 00:23:33.221 "params": { 00:23:33.221 "discovery_filter": "match_any", 00:23:33.221 "admin_cmd_passthru": { 00:23:33.221 "identify_ctrlr": false 00:23:33.221 } 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "nvmf_set_max_subsystems", 00:23:33.221 "params": { 00:23:33.221 "max_subsystems": 1024 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "nvmf_set_crdt", 00:23:33.221 "params": { 00:23:33.221 "crdt1": 0, 00:23:33.221 "crdt2": 0, 00:23:33.221 "crdt3": 0 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "nvmf_create_transport", 00:23:33.221 "params": { 00:23:33.221 "trtype": "TCP", 00:23:33.221 "max_queue_depth": 128, 00:23:33.221 "max_io_qpairs_per_ctrlr": 127, 00:23:33.221 "in_capsule_data_size": 4096, 00:23:33.221 "max_io_size": 131072, 00:23:33.221 "io_unit_size": 131072, 00:23:33.221 "max_aq_depth": 128, 00:23:33.221 "num_shared_buffers": 511, 00:23:33.221 "buf_cache_size": 4294967295, 00:23:33.221 "dif_insert_or_strip": false, 00:23:33.221 "zcopy": false, 00:23:33.221 "c2h_success": false, 00:23:33.221 "sock_priority": 0, 00:23:33.221 "abort_timeout_sec": 1, 00:23:33.221 "ack_timeout": 0, 00:23:33.221 "data_wr_pool_size": 0 00:23:33.221 } 00:23:33.221 }, 00:23:33.221 { 00:23:33.221 "method": "nvmf_create_subsystem", 00:23:33.221 "params": { 00:23:33.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.221 "allow_any_host": false, 00:23:33.221 "serial_number": "SPDK00000000000001", 00:23:33.221 "model_number": "SPDK bdev Controller", 00:23:33.221 "max_namespaces": 10, 00:23:33.221 "min_cntlid": 1, 00:23:33.222 "max_cntlid": 65519, 00:23:33.222 "ana_reporting": false 00:23:33.222 } 00:23:33.222 }, 00:23:33.222 { 00:23:33.222 "method": "nvmf_subsystem_add_host", 00:23:33.222 "params": { 00:23:33.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.222 "host": "nqn.2016-06.io.spdk:host1", 00:23:33.222 "psk": "/tmp/tmp.okA4zgB3py" 00:23:33.222 } 00:23:33.222 }, 00:23:33.222 { 00:23:33.222 "method": "nvmf_subsystem_add_ns", 00:23:33.222 "params": { 00:23:33.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.222 "namespace": { 00:23:33.222 "nsid": 1, 00:23:33.222 "bdev_name": "malloc0", 00:23:33.222 "nguid": "59E8B0EEF74447E48916F3FB7EC072B1", 00:23:33.222 "uuid": "59e8b0ee-f744-47e4-8916-f3fb7ec072b1", 00:23:33.222 "no_auto_visible": false 00:23:33.222 } 00:23:33.222 } 00:23:33.222 }, 00:23:33.222 { 00:23:33.222 "method": "nvmf_subsystem_add_listener", 00:23:33.222 "params": { 00:23:33.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.222 "listen_address": { 00:23:33.222 "trtype": "TCP", 00:23:33.222 "adrfam": "IPv4", 00:23:33.222 "traddr": "10.0.0.2", 00:23:33.222 "trsvcid": "4420" 00:23:33.222 }, 00:23:33.222 "secure_channel": true 00:23:33.222 } 00:23:33.222 } 00:23:33.222 ] 00:23:33.222 } 00:23:33.222 ] 00:23:33.222 }' 00:23:33.222 03:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:33.509 03:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:33.509 "subsystems": [ 00:23:33.509 { 00:23:33.509 "subsystem": "keyring", 00:23:33.509 "config": [] 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "subsystem": "iobuf", 00:23:33.509 "config": [ 00:23:33.509 { 00:23:33.509 "method": "iobuf_set_options", 00:23:33.509 "params": { 00:23:33.509 "small_pool_count": 8192, 00:23:33.509 "large_pool_count": 1024, 00:23:33.509 "small_bufsize": 8192, 00:23:33.509 "large_bufsize": 135168 00:23:33.509 } 00:23:33.509 } 00:23:33.509 ] 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "subsystem": "sock", 00:23:33.509 "config": [ 00:23:33.509 { 00:23:33.509 "method": "sock_set_default_impl", 00:23:33.509 "params": { 00:23:33.509 "impl_name": "posix" 00:23:33.509 } 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "method": "sock_impl_set_options", 00:23:33.509 "params": { 00:23:33.509 "impl_name": "ssl", 00:23:33.509 "recv_buf_size": 4096, 00:23:33.509 "send_buf_size": 4096, 00:23:33.509 "enable_recv_pipe": true, 00:23:33.509 "enable_quickack": false, 00:23:33.509 "enable_placement_id": 0, 00:23:33.509 "enable_zerocopy_send_server": true, 00:23:33.509 "enable_zerocopy_send_client": false, 00:23:33.509 "zerocopy_threshold": 0, 00:23:33.509 "tls_version": 0, 00:23:33.509 "enable_ktls": false 00:23:33.509 } 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "method": "sock_impl_set_options", 00:23:33.509 "params": { 00:23:33.509 "impl_name": "posix", 00:23:33.509 "recv_buf_size": 2097152, 00:23:33.509 "send_buf_size": 2097152, 00:23:33.509 "enable_recv_pipe": true, 00:23:33.509 "enable_quickack": false, 00:23:33.509 "enable_placement_id": 0, 00:23:33.509 "enable_zerocopy_send_server": true, 00:23:33.509 "enable_zerocopy_send_client": false, 00:23:33.509 "zerocopy_threshold": 0, 00:23:33.509 "tls_version": 0, 00:23:33.509 "enable_ktls": false 00:23:33.509 } 00:23:33.509 } 00:23:33.509 ] 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "subsystem": "vmd", 00:23:33.509 "config": [] 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "subsystem": "accel", 00:23:33.509 "config": [ 00:23:33.509 { 00:23:33.509 "method": "accel_set_options", 00:23:33.509 "params": { 00:23:33.509 "small_cache_size": 128, 00:23:33.509 "large_cache_size": 16, 00:23:33.509 "task_count": 2048, 00:23:33.509 "sequence_count": 2048, 00:23:33.509 "buf_count": 2048 00:23:33.509 } 00:23:33.509 } 00:23:33.509 ] 00:23:33.509 }, 00:23:33.509 { 00:23:33.509 "subsystem": "bdev", 00:23:33.509 "config": [ 00:23:33.509 { 00:23:33.509 "method": "bdev_set_options", 00:23:33.509 "params": { 00:23:33.509 "bdev_io_pool_size": 65535, 00:23:33.510 "bdev_io_cache_size": 256, 00:23:33.510 "bdev_auto_examine": true, 00:23:33.510 "iobuf_small_cache_size": 128, 00:23:33.510 "iobuf_large_cache_size": 16 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_raid_set_options", 00:23:33.510 "params": { 00:23:33.510 "process_window_size_kb": 1024 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_iscsi_set_options", 00:23:33.510 "params": { 00:23:33.510 "timeout_sec": 30 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_nvme_set_options", 00:23:33.510 "params": { 00:23:33.510 "action_on_timeout": "none", 00:23:33.510 "timeout_us": 0, 00:23:33.510 "timeout_admin_us": 0, 00:23:33.510 "keep_alive_timeout_ms": 10000, 00:23:33.510 "arbitration_burst": 0, 00:23:33.510 "low_priority_weight": 0, 
00:23:33.510 "medium_priority_weight": 0, 00:23:33.510 "high_priority_weight": 0, 00:23:33.510 "nvme_adminq_poll_period_us": 10000, 00:23:33.510 "nvme_ioq_poll_period_us": 0, 00:23:33.510 "io_queue_requests": 512, 00:23:33.510 "delay_cmd_submit": true, 00:23:33.510 "transport_retry_count": 4, 00:23:33.510 "bdev_retry_count": 3, 00:23:33.510 "transport_ack_timeout": 0, 00:23:33.510 "ctrlr_loss_timeout_sec": 0, 00:23:33.510 "reconnect_delay_sec": 0, 00:23:33.510 "fast_io_fail_timeout_sec": 0, 00:23:33.510 "disable_auto_failback": false, 00:23:33.510 "generate_uuids": false, 00:23:33.510 "transport_tos": 0, 00:23:33.510 "nvme_error_stat": false, 00:23:33.510 "rdma_srq_size": 0, 00:23:33.510 "io_path_stat": false, 00:23:33.510 "allow_accel_sequence": false, 00:23:33.510 "rdma_max_cq_size": 0, 00:23:33.510 "rdma_cm_event_timeout_ms": 0, 00:23:33.510 "dhchap_digests": [ 00:23:33.510 "sha256", 00:23:33.510 "sha384", 00:23:33.510 "sha512" 00:23:33.510 ], 00:23:33.510 "dhchap_dhgroups": [ 00:23:33.510 "null", 00:23:33.510 "ffdhe2048", 00:23:33.510 "ffdhe3072", 00:23:33.510 "ffdhe4096", 00:23:33.510 "ffdhe6144", 00:23:33.510 "ffdhe8192" 00:23:33.510 ] 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_nvme_attach_controller", 00:23:33.510 "params": { 00:23:33.510 "name": "TLSTEST", 00:23:33.510 "trtype": "TCP", 00:23:33.510 "adrfam": "IPv4", 00:23:33.510 "traddr": "10.0.0.2", 00:23:33.510 "trsvcid": "4420", 00:23:33.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.510 "prchk_reftag": false, 00:23:33.510 "prchk_guard": false, 00:23:33.510 "ctrlr_loss_timeout_sec": 0, 00:23:33.510 "reconnect_delay_sec": 0, 00:23:33.510 "fast_io_fail_timeout_sec": 0, 00:23:33.510 "psk": "/tmp/tmp.okA4zgB3py", 00:23:33.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.510 "hdgst": false, 00:23:33.510 "ddgst": false 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_nvme_set_hotplug", 00:23:33.510 "params": { 00:23:33.510 "period_us": 100000, 00:23:33.510 "enable": false 00:23:33.510 } 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "method": "bdev_wait_for_examine" 00:23:33.510 } 00:23:33.510 ] 00:23:33.510 }, 00:23:33.510 { 00:23:33.510 "subsystem": "nbd", 00:23:33.510 "config": [] 00:23:33.510 } 00:23:33.510 ] 00:23:33.510 }' 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 480803 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 480803 ']' 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 480803 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.510 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 480803 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 480803' 00:23:33.773 killing process with pid 480803 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 480803 00:23:33.773 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.773 00:23:33.773 Latency(us) 00:23:33.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.773 
=================================================================================================================== 00:23:33.773 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.773 [2024-07-23 03:23:00.077071] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 480803 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 480516 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 480516 ']' 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 480516 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 480516 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 480516' 00:23:33.773 killing process with pid 480516 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 480516 00:23:33.773 [2024-07-23 03:23:00.329273] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.773 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 480516 00:23:34.032 03:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:34.032 03:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.032 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:34.032 03:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:34.032 "subsystems": [ 00:23:34.032 { 00:23:34.032 "subsystem": "keyring", 00:23:34.032 "config": [] 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "subsystem": "iobuf", 00:23:34.032 "config": [ 00:23:34.032 { 00:23:34.032 "method": "iobuf_set_options", 00:23:34.032 "params": { 00:23:34.032 "small_pool_count": 8192, 00:23:34.032 "large_pool_count": 1024, 00:23:34.032 "small_bufsize": 8192, 00:23:34.032 "large_bufsize": 135168 00:23:34.032 } 00:23:34.032 } 00:23:34.032 ] 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "subsystem": "sock", 00:23:34.032 "config": [ 00:23:34.032 { 00:23:34.032 "method": "sock_set_default_impl", 00:23:34.032 "params": { 00:23:34.032 "impl_name": "posix" 00:23:34.032 } 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "method": "sock_impl_set_options", 00:23:34.032 "params": { 00:23:34.032 "impl_name": "ssl", 00:23:34.032 "recv_buf_size": 4096, 00:23:34.032 "send_buf_size": 4096, 00:23:34.032 "enable_recv_pipe": true, 00:23:34.032 "enable_quickack": false, 00:23:34.032 "enable_placement_id": 0, 00:23:34.032 "enable_zerocopy_send_server": true, 00:23:34.032 "enable_zerocopy_send_client": false, 00:23:34.032 "zerocopy_threshold": 0, 00:23:34.032 "tls_version": 0, 00:23:34.032 "enable_ktls": false 00:23:34.032 } 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "method": "sock_impl_set_options", 00:23:34.032 "params": { 00:23:34.032 "impl_name": "posix", 00:23:34.032 "recv_buf_size": 2097152, 00:23:34.032 
"send_buf_size": 2097152, 00:23:34.032 "enable_recv_pipe": true, 00:23:34.032 "enable_quickack": false, 00:23:34.032 "enable_placement_id": 0, 00:23:34.032 "enable_zerocopy_send_server": true, 00:23:34.032 "enable_zerocopy_send_client": false, 00:23:34.032 "zerocopy_threshold": 0, 00:23:34.032 "tls_version": 0, 00:23:34.032 "enable_ktls": false 00:23:34.032 } 00:23:34.032 } 00:23:34.032 ] 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "subsystem": "vmd", 00:23:34.032 "config": [] 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "subsystem": "accel", 00:23:34.032 "config": [ 00:23:34.032 { 00:23:34.032 "method": "accel_set_options", 00:23:34.032 "params": { 00:23:34.032 "small_cache_size": 128, 00:23:34.032 "large_cache_size": 16, 00:23:34.032 "task_count": 2048, 00:23:34.032 "sequence_count": 2048, 00:23:34.032 "buf_count": 2048 00:23:34.032 } 00:23:34.032 } 00:23:34.032 ] 00:23:34.032 }, 00:23:34.032 { 00:23:34.032 "subsystem": "bdev", 00:23:34.033 "config": [ 00:23:34.033 { 00:23:34.033 "method": "bdev_set_options", 00:23:34.033 "params": { 00:23:34.033 "bdev_io_pool_size": 65535, 00:23:34.033 "bdev_io_cache_size": 256, 00:23:34.033 "bdev_auto_examine": true, 00:23:34.033 "iobuf_small_cache_size": 128, 00:23:34.033 "iobuf_large_cache_size": 16 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_raid_set_options", 00:23:34.033 "params": { 00:23:34.033 "process_window_size_kb": 1024 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_iscsi_set_options", 00:23:34.033 "params": { 00:23:34.033 "timeout_sec": 30 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_nvme_set_options", 00:23:34.033 "params": { 00:23:34.033 "action_on_timeout": "none", 00:23:34.033 "timeout_us": 0, 00:23:34.033 "timeout_admin_us": 0, 00:23:34.033 "keep_alive_timeout_ms": 10000, 00:23:34.033 "arbitration_burst": 0, 00:23:34.033 "low_priority_weight": 0, 00:23:34.033 "medium_priority_weight": 0, 00:23:34.033 "high_priority_weight": 0, 00:23:34.033 "nvme_adminq_poll_period_us": 10000, 00:23:34.033 "nvme_ioq_poll_period_us": 0, 00:23:34.033 "io_queue_requests": 0, 00:23:34.033 "delay_cmd_submit": true, 00:23:34.033 "transport_retry_count": 4, 00:23:34.033 "bdev_retry_count": 3, 00:23:34.033 "transport_ack_timeout": 0, 00:23:34.033 "ctrlr_loss_timeout_sec": 0, 00:23:34.033 "reconnect_delay_sec": 0, 00:23:34.033 "fast_io_fail_timeout_sec": 0, 00:23:34.033 "disable_auto_failback": false, 00:23:34.033 "generate_uuids": false, 00:23:34.033 "transport_tos": 0, 00:23:34.033 "nvme_error_stat": false, 00:23:34.033 "rdma_srq_size": 0, 00:23:34.033 "io_path_stat": false, 00:23:34.033 "allow_accel_sequence": false, 00:23:34.033 "rdma_max_cq_size": 0, 00:23:34.033 "rdma_cm_event_timeout_ms": 0, 00:23:34.033 "dhchap_digests": [ 00:23:34.033 "sha256", 00:23:34.033 "sha384", 00:23:34.033 "sha512" 00:23:34.033 ], 00:23:34.033 "dhchap_dhgroups": [ 00:23:34.033 "null", 00:23:34.033 "ffdhe2048", 00:23:34.033 "ffdhe3072", 00:23:34.033 "ffdhe4096", 00:23:34.033 "ffdhe6144", 00:23:34.033 "ffdhe8192" 00:23:34.033 ] 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_nvme_set_hotplug", 00:23:34.033 "params": { 00:23:34.033 "period_us": 100000, 00:23:34.033 "enable": false 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_malloc_create", 00:23:34.033 "params": { 00:23:34.033 "name": "malloc0", 00:23:34.033 "num_blocks": 8192, 00:23:34.033 "block_size": 4096, 00:23:34.033 "physical_block_size": 4096, 00:23:34.033 "uuid": 
"59e8b0ee-f744-47e4-8916-f3fb7ec072b1", 00:23:34.033 "optimal_io_boundary": 0 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "bdev_wait_for_examine" 00:23:34.033 } 00:23:34.033 ] 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "subsystem": "nbd", 00:23:34.033 "config": [] 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "subsystem": "scheduler", 00:23:34.033 "config": [ 00:23:34.033 { 00:23:34.033 "method": "framework_set_scheduler", 00:23:34.033 "params": { 00:23:34.033 "name": "static" 00:23:34.033 } 00:23:34.033 } 00:23:34.033 ] 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "subsystem": "nvmf", 00:23:34.033 "config": [ 00:23:34.033 { 00:23:34.033 "method": "nvmf_set_config", 00:23:34.033 "params": { 00:23:34.033 "discovery_filter": "match_any", 00:23:34.033 "admin_cmd_passthru": { 00:23:34.033 "identify_ctrlr": false 00:23:34.033 } 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_set_max_subsystems", 00:23:34.033 "params": { 00:23:34.033 "max_subsystems": 1024 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_set_crdt", 00:23:34.033 "params": { 00:23:34.033 "crdt1": 0, 00:23:34.033 "crdt2": 0, 00:23:34.033 "crdt3": 0 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_create_transport", 00:23:34.033 "params": { 00:23:34.033 "trtype": "TCP", 00:23:34.033 "max_queue_depth": 128, 00:23:34.033 "max_io_qpairs_per_ctrlr": 127, 00:23:34.033 "in_capsule_data_size": 4096, 00:23:34.033 "max_io_size": 131072, 00:23:34.033 "io_unit_size": 131072, 00:23:34.033 "max_aq_depth": 128, 00:23:34.033 "num_shared_buffers": 511, 00:23:34.033 "buf_cache_size": 4294967295, 00:23:34.033 "dif_insert_or_strip": false, 00:23:34.033 "zcopy": false, 00:23:34.033 "c2h_success": false, 00:23:34.033 "sock_priority": 0, 00:23:34.033 "abort_timeout_sec": 1, 00:23:34.033 "ack_timeout": 0, 00:23:34.033 "data_wr_pool_size": 0 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_create_subsystem", 00:23:34.033 "params": { 00:23:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.033 "allow_any_host": false, 00:23:34.033 "serial_number": "SPDK00000000000001", 00:23:34.033 "model_number": "SPDK bdev Controller", 00:23:34.033 "max_namespaces": 10, 00:23:34.033 "min_cntlid": 1, 00:23:34.033 "max_cntlid": 65519, 00:23:34.033 "ana_reporting": false 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_subsystem_add_host", 00:23:34.033 "params": { 00:23:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.033 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.033 "psk": "/tmp/tmp.okA4zgB3py" 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_subsystem_add_ns", 00:23:34.033 "params": { 00:23:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.033 "namespace": { 00:23:34.033 "nsid": 1, 00:23:34.033 "bdev_name": "malloc0", 00:23:34.033 "nguid": "59E8B0EEF74447E48916F3FB7EC072B1", 00:23:34.033 "uuid": "59e8b0ee-f744-47e4-8916-f3fb7ec072b1", 00:23:34.033 "no_auto_visible": false 00:23:34.033 } 00:23:34.033 } 00:23:34.033 }, 00:23:34.033 { 00:23:34.033 "method": "nvmf_subsystem_add_listener", 00:23:34.033 "params": { 00:23:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.033 "listen_address": { 00:23:34.033 "trtype": "TCP", 00:23:34.033 "adrfam": "IPv4", 00:23:34.033 "traddr": "10.0.0.2", 00:23:34.033 "trsvcid": "4420" 00:23:34.033 }, 00:23:34.033 "secure_channel": true 00:23:34.033 } 00:23:34.033 } 00:23:34.033 ] 00:23:34.033 } 00:23:34.033 ] 00:23:34.033 }' 00:23:34.033 03:23:00 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=480968 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 480968 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 480968 ']' 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.033 03:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.292 [2024-07-23 03:23:00.623623] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:34.292 [2024-07-23 03:23:00.623703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.292 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.292 [2024-07-23 03:23:00.690408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.292 [2024-07-23 03:23:00.781303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.292 [2024-07-23 03:23:00.781365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.292 [2024-07-23 03:23:00.781392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.292 [2024-07-23 03:23:00.781413] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.292 [2024-07-23 03:23:00.781431] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
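The nvmf_tgt that starts here (nvmfpid=480968) is not configured by individual RPCs; it is handed the JSON captured by save_config above through -c /dev/fd/62. A minimal sketch of that pattern, assuming bash process substitution is what backs the /dev/fd descriptor (the log only shows the echoed JSON and the -c argument), with the workspace path shortened for readability:

# Capture the live target configuration as JSON (target/tls.sh@196 above).
tgtconf=$(./spdk/scripts/rpc.py save_config)

# Start a fresh target directly from that JSON; no further rpc.py setup calls are needed.
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")

The same trick is used for the initiator a little further down: the saved bdevperf configuration is fed to bdevperf via -c /dev/fd/63, so the bdev_nvme_attach_controller call is replayed from the config instead of being issued by hand.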
00:23:34.292 [2024-07-23 03:23:00.781538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.550 [2024-07-23 03:23:01.010790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.550 [2024-07-23 03:23:01.026724] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:34.550 [2024-07-23 03:23:01.042789] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.550 [2024-07-23 03:23:01.050802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=481116 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 481116 /var/tmp/bdevperf.sock 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 481116 ']' 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.117 03:23:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:35.117 "subsystems": [ 00:23:35.117 { 00:23:35.117 "subsystem": "keyring", 00:23:35.117 "config": [] 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "subsystem": "iobuf", 00:23:35.117 "config": [ 00:23:35.117 { 00:23:35.117 "method": "iobuf_set_options", 00:23:35.117 "params": { 00:23:35.117 "small_pool_count": 8192, 00:23:35.117 "large_pool_count": 1024, 00:23:35.117 "small_bufsize": 8192, 00:23:35.117 "large_bufsize": 135168 00:23:35.117 } 00:23:35.117 } 00:23:35.117 ] 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "subsystem": "sock", 00:23:35.117 "config": [ 00:23:35.117 { 00:23:35.117 "method": "sock_set_default_impl", 00:23:35.117 "params": { 00:23:35.117 "impl_name": "posix" 00:23:35.117 } 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "method": "sock_impl_set_options", 00:23:35.117 "params": { 00:23:35.117 "impl_name": "ssl", 00:23:35.117 "recv_buf_size": 4096, 00:23:35.117 "send_buf_size": 4096, 00:23:35.117 "enable_recv_pipe": true, 00:23:35.117 "enable_quickack": false, 00:23:35.117 "enable_placement_id": 0, 00:23:35.117 "enable_zerocopy_send_server": true, 00:23:35.117 "enable_zerocopy_send_client": false, 00:23:35.117 "zerocopy_threshold": 0, 00:23:35.117 "tls_version": 0, 00:23:35.117 "enable_ktls": false 00:23:35.117 } 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "method": "sock_impl_set_options", 00:23:35.117 "params": { 00:23:35.117 "impl_name": "posix", 00:23:35.117 "recv_buf_size": 2097152, 00:23:35.117 "send_buf_size": 2097152, 00:23:35.117 "enable_recv_pipe": true, 00:23:35.117 
"enable_quickack": false, 00:23:35.117 "enable_placement_id": 0, 00:23:35.117 "enable_zerocopy_send_server": true, 00:23:35.117 "enable_zerocopy_send_client": false, 00:23:35.117 "zerocopy_threshold": 0, 00:23:35.117 "tls_version": 0, 00:23:35.117 "enable_ktls": false 00:23:35.117 } 00:23:35.117 } 00:23:35.117 ] 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "subsystem": "vmd", 00:23:35.117 "config": [] 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "subsystem": "accel", 00:23:35.117 "config": [ 00:23:35.117 { 00:23:35.117 "method": "accel_set_options", 00:23:35.117 "params": { 00:23:35.117 "small_cache_size": 128, 00:23:35.117 "large_cache_size": 16, 00:23:35.117 "task_count": 2048, 00:23:35.117 "sequence_count": 2048, 00:23:35.117 "buf_count": 2048 00:23:35.117 } 00:23:35.117 } 00:23:35.117 ] 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "subsystem": "bdev", 00:23:35.117 "config": [ 00:23:35.117 { 00:23:35.117 "method": "bdev_set_options", 00:23:35.117 "params": { 00:23:35.117 "bdev_io_pool_size": 65535, 00:23:35.117 "bdev_io_cache_size": 256, 00:23:35.117 "bdev_auto_examine": true, 00:23:35.117 "iobuf_small_cache_size": 128, 00:23:35.117 "iobuf_large_cache_size": 16 00:23:35.117 } 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "method": "bdev_raid_set_options", 00:23:35.117 "params": { 00:23:35.117 "process_window_size_kb": 1024 00:23:35.117 } 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "method": "bdev_iscsi_set_options", 00:23:35.117 "params": { 00:23:35.117 "timeout_sec": 30 00:23:35.117 } 00:23:35.117 }, 00:23:35.117 { 00:23:35.117 "method": "bdev_nvme_set_options", 00:23:35.117 "params": { 00:23:35.117 "action_on_timeout": "none", 00:23:35.117 "timeout_us": 0, 00:23:35.117 "timeout_admin_us": 0, 00:23:35.117 "keep_alive_timeout_ms": 10000, 00:23:35.117 "arbitration_burst": 0, 00:23:35.117 "low_priority_weight": 0, 00:23:35.117 "medium_priority_weight": 0, 00:23:35.117 "high_priority_weight": 0, 00:23:35.117 "nvme_adminq_poll_period_us": 10000, 00:23:35.117 "nvme_ioq_poll_period_us": 0, 00:23:35.117 "io_queue_requests": 512, 00:23:35.117 "delay_cmd_submit": true, 00:23:35.117 "transport_retry_count": 4, 00:23:35.117 "bdev_retry_count": 3, 00:23:35.117 "transport_ack_timeout": 0, 00:23:35.117 "ctrlr_loss_timeout_sec": 0, 00:23:35.117 "reconnect_delay_sec": 0, 00:23:35.117 "fast_io_fail_timeout_sec": 0, 00:23:35.117 "disable_auto_failback": false, 00:23:35.117 "generate_uuids": false, 00:23:35.117 "transport_tos": 0, 00:23:35.117 "nvme_error_stat": false, 00:23:35.117 "rdma_srq_size": 0, 00:23:35.117 "io_path_stat": false, 00:23:35.117 "allow_accel_sequence": false, 00:23:35.117 "rdma_max_cq_size": 0, 00:23:35.117 "rdma_cm_event_timeout_ms": 0, 00:23:35.117 "dhchap_digests": [ 00:23:35.118 "sha256", 00:23:35.118 "sha384", 00:23:35.118 "sha512" 00:23:35.118 ], 00:23:35.118 "dhchap_dhgroups": [ 00:23:35.118 "null", 00:23:35.118 "ffdhe2048", 00:23:35.118 "ffdhe3072", 00:23:35.118 "ffdhe4096", 00:23:35.118 "ffdhe6144", 00:23:35.118 "ffdhe8192" 00:23:35.118 ] 00:23:35.118 } 00:23:35.118 }, 00:23:35.118 { 00:23:35.118 "method": "bdev_nvme_attach_controller", 00:23:35.118 "params": { 00:23:35.118 "name": "TLSTEST", 00:23:35.118 "trtype": "TCP", 00:23:35.118 "adrfam": "IPv4", 00:23:35.118 "traddr": "10.0.0.2", 00:23:35.118 "trsvcid": "4420", 00:23:35.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.118 "prchk_reftag": false, 00:23:35.118 "prchk_guard": false, 00:23:35.118 "ctrlr_loss_timeout_sec": 0, 00:23:35.118 "reconnect_delay_sec": 0, 00:23:35.118 "fast_io_fail_timeout_sec": 0, 00:23:35.118 
"psk": "/tmp/tmp.okA4zgB3py", 00:23:35.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.118 "hdgst": false, 00:23:35.118 "ddgst": false 00:23:35.118 } 00:23:35.118 }, 00:23:35.118 { 00:23:35.118 "method": "bdev_nvme_set_hotplug", 00:23:35.118 "params": { 00:23:35.118 "period_us": 100000, 00:23:35.118 "enable": false 00:23:35.118 } 00:23:35.118 }, 00:23:35.118 { 00:23:35.118 "method": "bdev_wait_for_examine" 00:23:35.118 } 00:23:35.118 ] 00:23:35.118 }, 00:23:35.118 { 00:23:35.118 "subsystem": "nbd", 00:23:35.118 "config": [] 00:23:35.118 } 00:23:35.118 ] 00:23:35.118 }' 00:23:35.118 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.118 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.118 03:23:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.118 [2024-07-23 03:23:01.674502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:35.118 [2024-07-23 03:23:01.674575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481116 ] 00:23:35.376 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.376 [2024-07-23 03:23:01.732276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.376 [2024-07-23 03:23:01.814846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.634 [2024-07-23 03:23:01.977450] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.634 [2024-07-23 03:23:01.977570] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:36.200 03:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.200 03:23:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:36.200 03:23:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:36.200 Running I/O for 10 seconds... 
00:23:48.398 00:23:48.398 Latency(us) 00:23:48.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.398 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.398 Verification LBA range: start 0x0 length 0x2000 00:23:48.398 TLSTESTn1 : 10.05 1943.68 7.59 0.00 0.00 65679.85 6990.51 92430.03 00:23:48.398 =================================================================================================================== 00:23:48.398 Total : 1943.68 7.59 0.00 0.00 65679.85 6990.51 92430.03 00:23:48.398 0 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 481116 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 481116 ']' 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 481116 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 481116 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 481116' 00:23:48.398 killing process with pid 481116 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 481116 00:23:48.398 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.398 00:23:48.398 Latency(us) 00:23:48.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.398 =================================================================================================================== 00:23:48.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.398 [2024-07-23 03:23:12.831398] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:48.398 03:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 481116 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 480968 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 480968 ']' 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 480968 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 480968 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 480968' 00:23:48.398 killing process with pid 480968 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 480968 00:23:48.398 [2024-07-23 03:23:13.078996] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 480968 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=482448 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 482448 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 482448 ']' 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.398 [2024-07-23 03:23:13.369470] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:48.398 [2024-07-23 03:23:13.369547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.398 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.398 [2024-07-23 03:23:13.435157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.398 [2024-07-23 03:23:13.522368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.398 [2024-07-23 03:23:13.522437] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.398 [2024-07-23 03:23:13.522451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.398 [2024-07-23 03:23:13.522462] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.398 [2024-07-23 03:23:13.522472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
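Both I/O phases in this log (the 10-second TLSTESTn1 run above and the 1-second nvme0n1 run that follows) are driven the same way: bdevperf is started idle with -z, the TLS controller is attached over its private RPC socket, and the queued workload is then triggered with the bdevperf.py helper. A sketch using the flags from this log, with paths shortened and the backgrounding/wait step only indicated in comments (the script uses waitforlisten for that):

BDEVPERF_SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) so the bdev can be attached via RPC before I/O begins.
./spdk/build/examples/bdevperf -m 0x4 -z -r "$BDEVPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &
# ...wait for the RPC socket, then attach the controller (see the attach_controller calls in this log)...

# Kick off the queued verify workload; -t 20 is the helper's own timeout as used above.
./spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BDEVPERF_SOCK" perform_tests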
00:23:48.398 [2024-07-23 03:23:13.522496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.okA4zgB3py 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.okA4zgB3py 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.398 [2024-07-23 03:23:13.873131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.398 03:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.398 03:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:48.398 [2024-07-23 03:23:14.402626] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.398 [2024-07-23 03:23:14.402886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.398 03:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:48.398 malloc0 00:23:48.398 03:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.398 03:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.okA4zgB3py 00:23:48.656 [2024-07-23 03:23:15.224758] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=482727 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 482727 /var/tmp/bdevperf.sock 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 482727 ']' 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:48.914 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.914 [2024-07-23 03:23:15.284992] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:48.914 [2024-07-23 03:23:15.285081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482727 ] 00:23:48.914 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.914 [2024-07-23 03:23:15.344252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.914 [2024-07-23 03:23:15.429750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.172 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:49.172 03:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:49.172 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.okA4zgB3py 00:23:49.430 03:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:49.687 [2024-07-23 03:23:16.070974] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.687 nvme0n1 00:23:49.687 03:23:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.945 Running I/O for 1 seconds... 
00:23:50.877 00:23:50.877 Latency(us) 00:23:50.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.877 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:50.877 Verification LBA range: start 0x0 length 0x2000 00:23:50.877 nvme0n1 : 1.06 1933.20 7.55 0.00 0.00 64662.16 9709.04 105634.32 00:23:50.877 =================================================================================================================== 00:23:50.877 Total : 1933.20 7.55 0.00 0.00 64662.16 9709.04 105634.32 00:23:50.877 0 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 482727 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 482727 ']' 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 482727 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 482727 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 482727' 00:23:50.877 killing process with pid 482727 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 482727 00:23:50.877 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.877 00:23:50.877 Latency(us) 00:23:50.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.877 =================================================================================================================== 00:23:50.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.877 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 482727 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 482448 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 482448 ']' 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 482448 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 482448 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 482448' 00:23:51.134 killing process with pid 482448 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 482448 00:23:51.134 [2024-07-23 03:23:17.603560] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:51.134 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 482448 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.392 03:23:17 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=483006 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 483006 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 483006 ']' 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.392 03:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.392 [2024-07-23 03:23:17.901672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:51.392 [2024-07-23 03:23:17.901756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.392 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.650 [2024-07-23 03:23:17.986920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.650 [2024-07-23 03:23:18.082426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.650 [2024-07-23 03:23:18.082503] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.650 [2024-07-23 03:23:18.082543] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.650 [2024-07-23 03:23:18.082567] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.650 [2024-07-23 03:23:18.082587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
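nvmfappstart above launches a fresh nvmf_tgt (pid 483006) inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket and a simple polling loop (not the exact waitforlisten implementation):

  # start the target in the test namespace, same flags as the trace above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll the RPC socket until the application is ready to accept commands
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
      sleep 0.5
  done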
00:23:51.650 [2024-07-23 03:23:18.082640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 [2024-07-23 03:23:18.258319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.908 malloc0 00:23:51.908 [2024-07-23 03:23:18.290925] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.908 [2024-07-23 03:23:18.291182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=483149 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 483149 /var/tmp/bdevperf.sock 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 483149 ']' 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.908 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 [2024-07-23 03:23:18.362702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:51.908 [2024-07-23 03:23:18.362773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483149 ] 00:23:51.908 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.908 [2024-07-23 03:23:18.429748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.167 [2024-07-23 03:23:18.521260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.167 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.167 03:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:52.167 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.okA4zgB3py 00:23:52.425 03:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:52.683 [2024-07-23 03:23:19.111829] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.683 nvme0n1 00:23:52.683 03:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.941 Running I/O for 1 seconds... 00:23:53.876 00:23:53.876 Latency(us) 00:23:53.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.876 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:53.876 Verification LBA range: start 0x0 length 0x2000 00:23:53.876 nvme0n1 : 1.07 1339.44 5.23 0.00 0.00 93320.28 10000.31 121945.51 00:23:53.876 =================================================================================================================== 00:23:53.876 Total : 1339.44 5.23 0.00 0.00 93320.28 10000.31 121945.51 00:23:53.876 0 00:23:53.876 03:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:53.876 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.876 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.135 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.135 03:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:54.135 "subsystems": [ 00:23:54.135 { 00:23:54.135 "subsystem": "keyring", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "keyring_file_add_key", 00:23:54.135 "params": { 00:23:54.135 "name": "key0", 00:23:54.135 "path": "/tmp/tmp.okA4zgB3py" 00:23:54.135 } 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "iobuf", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "iobuf_set_options", 00:23:54.135 "params": { 00:23:54.135 "small_pool_count": 8192, 00:23:54.135 "large_pool_count": 1024, 00:23:54.135 "small_bufsize": 8192, 00:23:54.135 "large_bufsize": 135168 00:23:54.135 } 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "sock", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "sock_set_default_impl", 00:23:54.135 "params": { 00:23:54.135 "impl_name": "posix" 00:23:54.135 } 00:23:54.135 }, 
00:23:54.135 { 00:23:54.135 "method": "sock_impl_set_options", 00:23:54.135 "params": { 00:23:54.135 "impl_name": "ssl", 00:23:54.135 "recv_buf_size": 4096, 00:23:54.135 "send_buf_size": 4096, 00:23:54.135 "enable_recv_pipe": true, 00:23:54.135 "enable_quickack": false, 00:23:54.135 "enable_placement_id": 0, 00:23:54.135 "enable_zerocopy_send_server": true, 00:23:54.135 "enable_zerocopy_send_client": false, 00:23:54.135 "zerocopy_threshold": 0, 00:23:54.135 "tls_version": 0, 00:23:54.135 "enable_ktls": false 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "sock_impl_set_options", 00:23:54.135 "params": { 00:23:54.135 "impl_name": "posix", 00:23:54.135 "recv_buf_size": 2097152, 00:23:54.135 "send_buf_size": 2097152, 00:23:54.135 "enable_recv_pipe": true, 00:23:54.135 "enable_quickack": false, 00:23:54.135 "enable_placement_id": 0, 00:23:54.135 "enable_zerocopy_send_server": true, 00:23:54.135 "enable_zerocopy_send_client": false, 00:23:54.135 "zerocopy_threshold": 0, 00:23:54.135 "tls_version": 0, 00:23:54.135 "enable_ktls": false 00:23:54.135 } 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "vmd", 00:23:54.135 "config": [] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "accel", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "accel_set_options", 00:23:54.135 "params": { 00:23:54.135 "small_cache_size": 128, 00:23:54.135 "large_cache_size": 16, 00:23:54.135 "task_count": 2048, 00:23:54.135 "sequence_count": 2048, 00:23:54.135 "buf_count": 2048 00:23:54.135 } 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "bdev", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "bdev_set_options", 00:23:54.135 "params": { 00:23:54.135 "bdev_io_pool_size": 65535, 00:23:54.135 "bdev_io_cache_size": 256, 00:23:54.135 "bdev_auto_examine": true, 00:23:54.135 "iobuf_small_cache_size": 128, 00:23:54.135 "iobuf_large_cache_size": 16 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_raid_set_options", 00:23:54.135 "params": { 00:23:54.135 "process_window_size_kb": 1024 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_iscsi_set_options", 00:23:54.135 "params": { 00:23:54.135 "timeout_sec": 30 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_nvme_set_options", 00:23:54.135 "params": { 00:23:54.135 "action_on_timeout": "none", 00:23:54.135 "timeout_us": 0, 00:23:54.135 "timeout_admin_us": 0, 00:23:54.135 "keep_alive_timeout_ms": 10000, 00:23:54.135 "arbitration_burst": 0, 00:23:54.135 "low_priority_weight": 0, 00:23:54.135 "medium_priority_weight": 0, 00:23:54.135 "high_priority_weight": 0, 00:23:54.135 "nvme_adminq_poll_period_us": 10000, 00:23:54.135 "nvme_ioq_poll_period_us": 0, 00:23:54.135 "io_queue_requests": 0, 00:23:54.135 "delay_cmd_submit": true, 00:23:54.135 "transport_retry_count": 4, 00:23:54.135 "bdev_retry_count": 3, 00:23:54.135 "transport_ack_timeout": 0, 00:23:54.135 "ctrlr_loss_timeout_sec": 0, 00:23:54.135 "reconnect_delay_sec": 0, 00:23:54.135 "fast_io_fail_timeout_sec": 0, 00:23:54.135 "disable_auto_failback": false, 00:23:54.135 "generate_uuids": false, 00:23:54.135 "transport_tos": 0, 00:23:54.135 "nvme_error_stat": false, 00:23:54.135 "rdma_srq_size": 0, 00:23:54.135 "io_path_stat": false, 00:23:54.135 "allow_accel_sequence": false, 00:23:54.135 "rdma_max_cq_size": 0, 00:23:54.135 "rdma_cm_event_timeout_ms": 0, 00:23:54.135 "dhchap_digests": [ 00:23:54.135 "sha256", 00:23:54.135 
"sha384", 00:23:54.135 "sha512" 00:23:54.135 ], 00:23:54.135 "dhchap_dhgroups": [ 00:23:54.135 "null", 00:23:54.135 "ffdhe2048", 00:23:54.135 "ffdhe3072", 00:23:54.135 "ffdhe4096", 00:23:54.135 "ffdhe6144", 00:23:54.135 "ffdhe8192" 00:23:54.135 ] 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_nvme_set_hotplug", 00:23:54.135 "params": { 00:23:54.135 "period_us": 100000, 00:23:54.135 "enable": false 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_malloc_create", 00:23:54.135 "params": { 00:23:54.135 "name": "malloc0", 00:23:54.135 "num_blocks": 8192, 00:23:54.135 "block_size": 4096, 00:23:54.135 "physical_block_size": 4096, 00:23:54.135 "uuid": "0870629a-8aed-46c9-b2bd-f5a979270be7", 00:23:54.135 "optimal_io_boundary": 0 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "bdev_wait_for_examine" 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "nbd", 00:23:54.135 "config": [] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "scheduler", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "framework_set_scheduler", 00:23:54.135 "params": { 00:23:54.135 "name": "static" 00:23:54.135 } 00:23:54.135 } 00:23:54.135 ] 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "subsystem": "nvmf", 00:23:54.135 "config": [ 00:23:54.135 { 00:23:54.135 "method": "nvmf_set_config", 00:23:54.135 "params": { 00:23:54.135 "discovery_filter": "match_any", 00:23:54.135 "admin_cmd_passthru": { 00:23:54.135 "identify_ctrlr": false 00:23:54.135 } 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_set_max_subsystems", 00:23:54.135 "params": { 00:23:54.135 "max_subsystems": 1024 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_set_crdt", 00:23:54.135 "params": { 00:23:54.135 "crdt1": 0, 00:23:54.135 "crdt2": 0, 00:23:54.135 "crdt3": 0 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_create_transport", 00:23:54.135 "params": { 00:23:54.135 "trtype": "TCP", 00:23:54.135 "max_queue_depth": 128, 00:23:54.135 "max_io_qpairs_per_ctrlr": 127, 00:23:54.135 "in_capsule_data_size": 4096, 00:23:54.135 "max_io_size": 131072, 00:23:54.135 "io_unit_size": 131072, 00:23:54.135 "max_aq_depth": 128, 00:23:54.135 "num_shared_buffers": 511, 00:23:54.135 "buf_cache_size": 4294967295, 00:23:54.135 "dif_insert_or_strip": false, 00:23:54.135 "zcopy": false, 00:23:54.135 "c2h_success": false, 00:23:54.135 "sock_priority": 0, 00:23:54.135 "abort_timeout_sec": 1, 00:23:54.135 "ack_timeout": 0, 00:23:54.135 "data_wr_pool_size": 0 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_create_subsystem", 00:23:54.135 "params": { 00:23:54.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.135 "allow_any_host": false, 00:23:54.135 "serial_number": "00000000000000000000", 00:23:54.135 "model_number": "SPDK bdev Controller", 00:23:54.135 "max_namespaces": 32, 00:23:54.135 "min_cntlid": 1, 00:23:54.135 "max_cntlid": 65519, 00:23:54.135 "ana_reporting": false 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_subsystem_add_host", 00:23:54.135 "params": { 00:23:54.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.135 "host": "nqn.2016-06.io.spdk:host1", 00:23:54.135 "psk": "key0" 00:23:54.135 } 00:23:54.135 }, 00:23:54.135 { 00:23:54.135 "method": "nvmf_subsystem_add_ns", 00:23:54.135 "params": { 00:23:54.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.136 "namespace": { 00:23:54.136 "nsid": 1, 00:23:54.136 
"bdev_name": "malloc0", 00:23:54.136 "nguid": "0870629A8AED46C9B2BDF5A979270BE7", 00:23:54.136 "uuid": "0870629a-8aed-46c9-b2bd-f5a979270be7", 00:23:54.136 "no_auto_visible": false 00:23:54.136 } 00:23:54.136 } 00:23:54.136 }, 00:23:54.136 { 00:23:54.136 "method": "nvmf_subsystem_add_listener", 00:23:54.136 "params": { 00:23:54.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.136 "listen_address": { 00:23:54.136 "trtype": "TCP", 00:23:54.136 "adrfam": "IPv4", 00:23:54.136 "traddr": "10.0.0.2", 00:23:54.136 "trsvcid": "4420" 00:23:54.136 }, 00:23:54.136 "secure_channel": true 00:23:54.136 } 00:23:54.136 } 00:23:54.136 ] 00:23:54.136 } 00:23:54.136 ] 00:23:54.136 }' 00:23:54.136 03:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:54.395 "subsystems": [ 00:23:54.395 { 00:23:54.395 "subsystem": "keyring", 00:23:54.395 "config": [ 00:23:54.395 { 00:23:54.395 "method": "keyring_file_add_key", 00:23:54.395 "params": { 00:23:54.395 "name": "key0", 00:23:54.395 "path": "/tmp/tmp.okA4zgB3py" 00:23:54.395 } 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "iobuf", 00:23:54.395 "config": [ 00:23:54.395 { 00:23:54.395 "method": "iobuf_set_options", 00:23:54.395 "params": { 00:23:54.395 "small_pool_count": 8192, 00:23:54.395 "large_pool_count": 1024, 00:23:54.395 "small_bufsize": 8192, 00:23:54.395 "large_bufsize": 135168 00:23:54.395 } 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "sock", 00:23:54.395 "config": [ 00:23:54.395 { 00:23:54.395 "method": "sock_set_default_impl", 00:23:54.395 "params": { 00:23:54.395 "impl_name": "posix" 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "sock_impl_set_options", 00:23:54.395 "params": { 00:23:54.395 "impl_name": "ssl", 00:23:54.395 "recv_buf_size": 4096, 00:23:54.395 "send_buf_size": 4096, 00:23:54.395 "enable_recv_pipe": true, 00:23:54.395 "enable_quickack": false, 00:23:54.395 "enable_placement_id": 0, 00:23:54.395 "enable_zerocopy_send_server": true, 00:23:54.395 "enable_zerocopy_send_client": false, 00:23:54.395 "zerocopy_threshold": 0, 00:23:54.395 "tls_version": 0, 00:23:54.395 "enable_ktls": false 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "sock_impl_set_options", 00:23:54.395 "params": { 00:23:54.395 "impl_name": "posix", 00:23:54.395 "recv_buf_size": 2097152, 00:23:54.395 "send_buf_size": 2097152, 00:23:54.395 "enable_recv_pipe": true, 00:23:54.395 "enable_quickack": false, 00:23:54.395 "enable_placement_id": 0, 00:23:54.395 "enable_zerocopy_send_server": true, 00:23:54.395 "enable_zerocopy_send_client": false, 00:23:54.395 "zerocopy_threshold": 0, 00:23:54.395 "tls_version": 0, 00:23:54.395 "enable_ktls": false 00:23:54.395 } 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "vmd", 00:23:54.395 "config": [] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "accel", 00:23:54.395 "config": [ 00:23:54.395 { 00:23:54.395 "method": "accel_set_options", 00:23:54.395 "params": { 00:23:54.395 "small_cache_size": 128, 00:23:54.395 "large_cache_size": 16, 00:23:54.395 "task_count": 2048, 00:23:54.395 "sequence_count": 2048, 00:23:54.395 "buf_count": 2048 00:23:54.395 } 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "bdev", 00:23:54.395 "config": [ 00:23:54.395 { 
00:23:54.395 "method": "bdev_set_options", 00:23:54.395 "params": { 00:23:54.395 "bdev_io_pool_size": 65535, 00:23:54.395 "bdev_io_cache_size": 256, 00:23:54.395 "bdev_auto_examine": true, 00:23:54.395 "iobuf_small_cache_size": 128, 00:23:54.395 "iobuf_large_cache_size": 16 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_raid_set_options", 00:23:54.395 "params": { 00:23:54.395 "process_window_size_kb": 1024 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_iscsi_set_options", 00:23:54.395 "params": { 00:23:54.395 "timeout_sec": 30 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_nvme_set_options", 00:23:54.395 "params": { 00:23:54.395 "action_on_timeout": "none", 00:23:54.395 "timeout_us": 0, 00:23:54.395 "timeout_admin_us": 0, 00:23:54.395 "keep_alive_timeout_ms": 10000, 00:23:54.395 "arbitration_burst": 0, 00:23:54.395 "low_priority_weight": 0, 00:23:54.395 "medium_priority_weight": 0, 00:23:54.395 "high_priority_weight": 0, 00:23:54.395 "nvme_adminq_poll_period_us": 10000, 00:23:54.395 "nvme_ioq_poll_period_us": 0, 00:23:54.395 "io_queue_requests": 512, 00:23:54.395 "delay_cmd_submit": true, 00:23:54.395 "transport_retry_count": 4, 00:23:54.395 "bdev_retry_count": 3, 00:23:54.395 "transport_ack_timeout": 0, 00:23:54.395 "ctrlr_loss_timeout_sec": 0, 00:23:54.395 "reconnect_delay_sec": 0, 00:23:54.395 "fast_io_fail_timeout_sec": 0, 00:23:54.395 "disable_auto_failback": false, 00:23:54.395 "generate_uuids": false, 00:23:54.395 "transport_tos": 0, 00:23:54.395 "nvme_error_stat": false, 00:23:54.395 "rdma_srq_size": 0, 00:23:54.395 "io_path_stat": false, 00:23:54.395 "allow_accel_sequence": false, 00:23:54.395 "rdma_max_cq_size": 0, 00:23:54.395 "rdma_cm_event_timeout_ms": 0, 00:23:54.395 "dhchap_digests": [ 00:23:54.395 "sha256", 00:23:54.395 "sha384", 00:23:54.395 "sha512" 00:23:54.395 ], 00:23:54.395 "dhchap_dhgroups": [ 00:23:54.395 "null", 00:23:54.395 "ffdhe2048", 00:23:54.395 "ffdhe3072", 00:23:54.395 "ffdhe4096", 00:23:54.395 "ffdhe6144", 00:23:54.395 "ffdhe8192" 00:23:54.395 ] 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_nvme_attach_controller", 00:23:54.395 "params": { 00:23:54.395 "name": "nvme0", 00:23:54.395 "trtype": "TCP", 00:23:54.395 "adrfam": "IPv4", 00:23:54.395 "traddr": "10.0.0.2", 00:23:54.395 "trsvcid": "4420", 00:23:54.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.395 "prchk_reftag": false, 00:23:54.395 "prchk_guard": false, 00:23:54.395 "ctrlr_loss_timeout_sec": 0, 00:23:54.395 "reconnect_delay_sec": 0, 00:23:54.395 "fast_io_fail_timeout_sec": 0, 00:23:54.395 "psk": "key0", 00:23:54.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.395 "hdgst": false, 00:23:54.395 "ddgst": false 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_nvme_set_hotplug", 00:23:54.395 "params": { 00:23:54.395 "period_us": 100000, 00:23:54.395 "enable": false 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_enable_histogram", 00:23:54.395 "params": { 00:23:54.395 "name": "nvme0n1", 00:23:54.395 "enable": true 00:23:54.395 } 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "method": "bdev_wait_for_examine" 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }, 00:23:54.395 { 00:23:54.395 "subsystem": "nbd", 00:23:54.395 "config": [] 00:23:54.395 } 00:23:54.395 ] 00:23:54.395 }' 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 483149 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' 
-z 483149 ']' 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 483149 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 483149 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 483149' 00:23:54.395 killing process with pid 483149 00:23:54.395 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 483149 00:23:54.395 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.395 00:23:54.395 Latency(us) 00:23:54.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.396 =================================================================================================================== 00:23:54.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.396 03:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 483149 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 483006 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 483006 ']' 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 483006 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 483006 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 483006' 00:23:54.654 killing process with pid 483006 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 483006 00:23:54.654 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 483006 00:23:54.913 03:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:54.913 03:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:54.913 "subsystems": [ 00:23:54.913 { 00:23:54.913 "subsystem": "keyring", 00:23:54.913 "config": [ 00:23:54.913 { 00:23:54.913 "method": "keyring_file_add_key", 00:23:54.913 "params": { 00:23:54.913 "name": "key0", 00:23:54.913 "path": "/tmp/tmp.okA4zgB3py" 00:23:54.913 } 00:23:54.913 } 00:23:54.913 ] 00:23:54.913 }, 00:23:54.913 { 00:23:54.913 "subsystem": "iobuf", 00:23:54.913 "config": [ 00:23:54.913 { 00:23:54.913 "method": "iobuf_set_options", 00:23:54.913 "params": { 00:23:54.913 "small_pool_count": 8192, 00:23:54.913 "large_pool_count": 1024, 00:23:54.913 "small_bufsize": 8192, 00:23:54.913 "large_bufsize": 135168 00:23:54.913 } 00:23:54.913 } 00:23:54.913 ] 00:23:54.913 }, 00:23:54.913 { 00:23:54.913 "subsystem": "sock", 00:23:54.913 "config": [ 00:23:54.913 { 00:23:54.913 "method": "sock_set_default_impl", 00:23:54.913 "params": { 00:23:54.913 "impl_name": "posix" 00:23:54.913 } 00:23:54.913 }, 00:23:54.913 { 
00:23:54.913 "method": "sock_impl_set_options", 00:23:54.913 "params": { 00:23:54.913 "impl_name": "ssl", 00:23:54.913 "recv_buf_size": 4096, 00:23:54.913 "send_buf_size": 4096, 00:23:54.913 "enable_recv_pipe": true, 00:23:54.913 "enable_quickack": false, 00:23:54.913 "enable_placement_id": 0, 00:23:54.913 "enable_zerocopy_send_server": true, 00:23:54.913 "enable_zerocopy_send_client": false, 00:23:54.913 "zerocopy_threshold": 0, 00:23:54.913 "tls_version": 0, 00:23:54.913 "enable_ktls": false 00:23:54.913 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "sock_impl_set_options", 00:23:54.914 "params": { 00:23:54.914 "impl_name": "posix", 00:23:54.914 "recv_buf_size": 2097152, 00:23:54.914 "send_buf_size": 2097152, 00:23:54.914 "enable_recv_pipe": true, 00:23:54.914 "enable_quickack": false, 00:23:54.914 "enable_placement_id": 0, 00:23:54.914 "enable_zerocopy_send_server": true, 00:23:54.914 "enable_zerocopy_send_client": false, 00:23:54.914 "zerocopy_threshold": 0, 00:23:54.914 "tls_version": 0, 00:23:54.914 "enable_ktls": false 00:23:54.914 } 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "vmd", 00:23:54.914 "config": [] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "accel", 00:23:54.914 "config": [ 00:23:54.914 { 00:23:54.914 "method": "accel_set_options", 00:23:54.914 "params": { 00:23:54.914 "small_cache_size": 128, 00:23:54.914 "large_cache_size": 16, 00:23:54.914 "task_count": 2048, 00:23:54.914 "sequence_count": 2048, 00:23:54.914 "buf_count": 2048 00:23:54.914 } 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "bdev", 00:23:54.914 "config": [ 00:23:54.914 { 00:23:54.914 "method": "bdev_set_options", 00:23:54.914 "params": { 00:23:54.914 "bdev_io_pool_size": 65535, 00:23:54.914 "bdev_io_cache_size": 256, 00:23:54.914 "bdev_auto_examine": true, 00:23:54.914 "iobuf_small_cache_size": 128, 00:23:54.914 "iobuf_large_cache_size": 16 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_raid_set_options", 00:23:54.914 "params": { 00:23:54.914 "process_window_size_kb": 1024 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_iscsi_set_options", 00:23:54.914 "params": { 00:23:54.914 "timeout_sec": 30 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_nvme_set_options", 00:23:54.914 "params": { 00:23:54.914 "action_on_timeout": "none", 00:23:54.914 "timeout_us": 0, 00:23:54.914 "timeout_admin_us": 0, 00:23:54.914 "keep_alive_timeout_ms": 10000, 00:23:54.914 "arbitration_burst": 0, 00:23:54.914 "low_priority_weight": 0, 00:23:54.914 "medium_priority_weight": 0, 00:23:54.914 "high_priority_weight": 0, 00:23:54.914 "nvme_adminq_poll_period_us": 10000, 00:23:54.914 "nvme_ioq_poll_period_us": 0, 00:23:54.914 "io_queue_requests": 0, 00:23:54.914 "delay_cmd_submit": true, 00:23:54.914 "transport_retry_count": 4, 00:23:54.914 "bdev_retry_count": 3, 00:23:54.914 "transport_ack_timeout": 0, 00:23:54.914 "ctrlr_loss_timeout_sec": 0, 00:23:54.914 "reconnect_delay_sec": 0, 00:23:54.914 "fast_io_fail_timeout_sec": 0, 00:23:54.914 "disable_auto_failback": false, 00:23:54.914 "generate_uuids": false, 00:23:54.914 "transport_tos": 0, 00:23:54.914 "nvme_error_stat": false, 00:23:54.914 "rdma_srq_size": 0, 00:23:54.914 "io_path_stat": false, 00:23:54.914 "allow_accel_sequence": false, 00:23:54.914 "rdma_max_cq_size": 0, 00:23:54.914 "rdma_cm_event_timeout_ms": 0, 00:23:54.914 "dhchap_digests": [ 00:23:54.914 "sha256", 00:23:54.914 "sha384", 
00:23:54.914 "sha512" 00:23:54.914 ], 00:23:54.914 "dhchap_dhgroups": [ 00:23:54.914 "null", 00:23:54.914 "ffdhe2048", 00:23:54.914 "ffdhe3072", 00:23:54.914 "ffdhe4096", 00:23:54.914 "ffdhe6144", 00:23:54.914 "ffdhe8192" 00:23:54.914 ] 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_nvme_set_hotplug", 00:23:54.914 "params": { 00:23:54.914 "period_us": 100000, 00:23:54.914 "enable": false 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_malloc_create", 00:23:54.914 "params": { 00:23:54.914 "name": "malloc0", 00:23:54.914 "num_blocks": 8192, 00:23:54.914 "block_size": 4096, 00:23:54.914 "physical_block_size": 4096, 00:23:54.914 "uuid": "0870629a-8aed-46c9-b2bd-f5a979270be7", 00:23:54.914 "optimal_io_boundary": 0 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "bdev_wait_for_examine" 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "nbd", 00:23:54.914 "config": [] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "scheduler", 00:23:54.914 "config": [ 00:23:54.914 { 00:23:54.914 "method": "framework_set_scheduler", 00:23:54.914 "params": { 00:23:54.914 "name": "static" 00:23:54.914 } 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "subsystem": "nvmf", 00:23:54.914 "config": [ 00:23:54.914 { 00:23:54.914 "method": "nvmf_set_config", 00:23:54.914 "params": { 00:23:54.914 "discovery_filter": "match_any", 00:23:54.914 "admin_cmd_passthru": { 00:23:54.914 "identify_ctrlr": false 00:23:54.914 } 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_set_max_subsystems", 00:23:54.914 "params": { 00:23:54.914 "max_subsystems": 1024 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_set_crdt", 00:23:54.914 "params": { 00:23:54.914 "crdt1": 0, 00:23:54.914 "crdt2": 0, 00:23:54.914 "crdt3": 0 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_create_transport", 00:23:54.914 "params": { 00:23:54.914 "trtype": "TCP", 00:23:54.914 "max_queue_depth": 128, 00:23:54.914 "max_io_qpairs_per_ctrlr": 127, 00:23:54.914 "in_capsule_data_size": 4096, 00:23:54.914 "max_io_size": 131072, 00:23:54.914 "io_unit_size": 131072, 00:23:54.914 "max_aq_depth": 128, 00:23:54.914 "num_shared_buffers": 511, 00:23:54.914 "buf_cache_size": 4294967295, 00:23:54.914 "dif_insert_or_strip": false, 00:23:54.914 "zcopy": false, 00:23:54.914 "c2h_success": false, 00:23:54.914 "sock_priority": 0, 00:23:54.914 "abort_timeout_sec": 1, 00:23:54.914 "ack_timeout": 0, 00:23:54.914 "data_wr_pool_size": 0 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_create_subsystem", 00:23:54.914 "params": { 00:23:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.914 "allow_any_host": false, 00:23:54.914 "serial_number": "00000000000000000000", 00:23:54.914 "model_number": "SPDK bdev Controller", 00:23:54.914 "max_namespaces": 32, 00:23:54.914 "min_cntlid": 1, 00:23:54.914 "max_cntlid": 65519, 00:23:54.914 "ana_reporting": false 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_subsystem_add_host", 00:23:54.914 "params": { 00:23:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.914 "host": "nqn.2016-06.io.spdk:host1", 00:23:54.914 "psk": "key0" 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_subsystem_add_ns", 00:23:54.914 "params": { 00:23:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.914 "namespace": { 00:23:54.914 "nsid": 1, 00:23:54.914 "bdev_name": 
"malloc0", 00:23:54.914 "nguid": "0870629A8AED46C9B2BDF5A979270BE7", 00:23:54.914 "uuid": "0870629a-8aed-46c9-b2bd-f5a979270be7", 00:23:54.914 "no_auto_visible": false 00:23:54.914 } 00:23:54.914 } 00:23:54.914 }, 00:23:54.914 { 00:23:54.914 "method": "nvmf_subsystem_add_listener", 00:23:54.914 "params": { 00:23:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.914 "listen_address": { 00:23:54.914 "trtype": "TCP", 00:23:54.914 "adrfam": "IPv4", 00:23:54.914 "traddr": "10.0.0.2", 00:23:54.914 "trsvcid": "4420" 00:23:54.914 }, 00:23:54.914 "secure_channel": true 00:23:54.914 } 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 } 00:23:54.914 ] 00:23:54.914 }' 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=483459 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 483459 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 483459 ']' 00:23:54.914 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.915 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.915 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.915 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.915 03:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.915 [2024-07-23 03:23:21.443953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:54.915 [2024-07-23 03:23:21.444030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.915 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.173 [2024-07-23 03:23:21.508701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.173 [2024-07-23 03:23:21.595974] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.173 [2024-07-23 03:23:21.596050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.173 [2024-07-23 03:23:21.596079] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.173 [2024-07-23 03:23:21.596090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.173 [2024-07-23 03:23:21.596100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.173 [2024-07-23 03:23:21.596178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.432 [2024-07-23 03:23:21.840799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.432 [2024-07-23 03:23:21.872798] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.432 [2024-07-23 03:23:21.880812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=483595 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 483595 /var/tmp/bdevperf.sock 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 483595 ']' 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.999 03:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:55.999 "subsystems": [ 00:23:55.999 { 00:23:55.999 "subsystem": "keyring", 00:23:55.999 "config": [ 00:23:55.999 { 00:23:55.999 "method": "keyring_file_add_key", 00:23:55.999 "params": { 00:23:55.999 "name": "key0", 00:23:55.999 "path": "/tmp/tmp.okA4zgB3py" 00:23:55.999 } 00:23:55.999 } 00:23:55.999 ] 00:23:55.999 }, 00:23:55.999 { 00:23:55.999 "subsystem": "iobuf", 00:23:55.999 "config": [ 00:23:55.999 { 00:23:55.999 "method": "iobuf_set_options", 00:23:55.999 "params": { 00:23:55.999 "small_pool_count": 8192, 00:23:55.999 "large_pool_count": 1024, 00:23:55.999 "small_bufsize": 8192, 00:23:55.999 "large_bufsize": 135168 00:23:55.999 } 00:23:55.999 } 00:23:55.999 ] 00:23:55.999 }, 00:23:55.999 { 00:23:55.999 "subsystem": "sock", 00:23:55.999 "config": [ 00:23:55.999 { 00:23:55.999 "method": "sock_set_default_impl", 00:23:55.999 "params": { 00:23:55.999 "impl_name": "posix" 00:23:55.999 } 00:23:55.999 }, 00:23:55.999 { 00:23:55.999 "method": "sock_impl_set_options", 00:23:55.999 "params": { 00:23:55.999 "impl_name": "ssl", 00:23:55.999 "recv_buf_size": 4096, 00:23:55.999 "send_buf_size": 4096, 00:23:55.999 "enable_recv_pipe": true, 00:23:55.999 "enable_quickack": false, 00:23:55.999 "enable_placement_id": 0, 00:23:55.999 "enable_zerocopy_send_server": true, 00:23:55.999 "enable_zerocopy_send_client": false, 00:23:55.999 "zerocopy_threshold": 0, 00:23:55.999 "tls_version": 0, 00:23:55.999 "enable_ktls": false 00:23:55.999 } 00:23:55.999 }, 00:23:55.999 { 00:23:55.999 "method": "sock_impl_set_options", 00:23:55.999 "params": { 00:23:56.000 "impl_name": "posix", 00:23:56.000 "recv_buf_size": 2097152, 00:23:56.000 "send_buf_size": 2097152, 00:23:56.000 
"enable_recv_pipe": true, 00:23:56.000 "enable_quickack": false, 00:23:56.000 "enable_placement_id": 0, 00:23:56.000 "enable_zerocopy_send_server": true, 00:23:56.000 "enable_zerocopy_send_client": false, 00:23:56.000 "zerocopy_threshold": 0, 00:23:56.000 "tls_version": 0, 00:23:56.000 "enable_ktls": false 00:23:56.000 } 00:23:56.000 } 00:23:56.000 ] 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "subsystem": "vmd", 00:23:56.000 "config": [] 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "subsystem": "accel", 00:23:56.000 "config": [ 00:23:56.000 { 00:23:56.000 "method": "accel_set_options", 00:23:56.000 "params": { 00:23:56.000 "small_cache_size": 128, 00:23:56.000 "large_cache_size": 16, 00:23:56.000 "task_count": 2048, 00:23:56.000 "sequence_count": 2048, 00:23:56.000 "buf_count": 2048 00:23:56.000 } 00:23:56.000 } 00:23:56.000 ] 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "subsystem": "bdev", 00:23:56.000 "config": [ 00:23:56.000 { 00:23:56.000 "method": "bdev_set_options", 00:23:56.000 "params": { 00:23:56.000 "bdev_io_pool_size": 65535, 00:23:56.000 "bdev_io_cache_size": 256, 00:23:56.000 "bdev_auto_examine": true, 00:23:56.000 "iobuf_small_cache_size": 128, 00:23:56.000 "iobuf_large_cache_size": 16 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_raid_set_options", 00:23:56.000 "params": { 00:23:56.000 "process_window_size_kb": 1024 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_iscsi_set_options", 00:23:56.000 "params": { 00:23:56.000 "timeout_sec": 30 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_nvme_set_options", 00:23:56.000 "params": { 00:23:56.000 "action_on_timeout": "none", 00:23:56.000 "timeout_us": 0, 00:23:56.000 "timeout_admin_us": 0, 00:23:56.000 "keep_alive_timeout_ms": 10000, 00:23:56.000 "arbitration_burst": 0, 00:23:56.000 "low_priority_weight": 0, 00:23:56.000 "medium_priority_weight": 0, 00:23:56.000 "high_priority_weight": 0, 00:23:56.000 "nvme_adminq_poll_period_us": 10000, 00:23:56.000 "nvme_ioq_poll_period_us": 0, 00:23:56.000 "io_queue_requests": 512, 00:23:56.000 "delay_cmd_submit": true, 00:23:56.000 "transport_retry_count": 4, 00:23:56.000 "bdev_retry_count": 3, 00:23:56.000 "transport_ack_timeout": 0, 00:23:56.000 "ctrlr_loss_timeout_sec": 0, 00:23:56.000 "reconnect_delay_sec": 0, 00:23:56.000 "fast_io_fail_timeout_sec": 0, 00:23:56.000 "disable_auto_failback": false, 00:23:56.000 "generate_uuids": false, 00:23:56.000 "transport_tos": 0, 00:23:56.000 "nvme_error_stat": false, 00:23:56.000 "rdma_srq_size": 0, 00:23:56.000 "io_path_stat": false, 00:23:56.000 "allow_accel_sequence": false, 00:23:56.000 "rdma_max_cq_size": 0, 00:23:56.000 "rdma_cm_event_timeout_ms": 0, 00:23:56.000 "dhchap_digests": [ 00:23:56.000 "sha256", 00:23:56.000 "sha384", 00:23:56.000 "sha512" 00:23:56.000 ], 00:23:56.000 "dhchap_dhgroups": [ 00:23:56.000 "null", 00:23:56.000 "ffdhe2048", 00:23:56.000 "ffdhe3072", 00:23:56.000 "ffdhe4096", 00:23:56.000 "ffdhe6144", 00:23:56.000 "ffdhe8192" 00:23:56.000 ] 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_nvme_attach_controller", 00:23:56.000 "params": { 00:23:56.000 "name": "nvme0", 00:23:56.000 "trtype": "TCP", 00:23:56.000 "adrfam": "IPv4", 00:23:56.000 "traddr": "10.0.0.2", 00:23:56.000 "trsvcid": "4420", 00:23:56.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.000 "prchk_reftag": false, 00:23:56.000 "prchk_guard": false, 00:23:56.000 "ctrlr_loss_timeout_sec": 0, 00:23:56.000 "reconnect_delay_sec": 0, 00:23:56.000 
"fast_io_fail_timeout_sec": 0, 00:23:56.000 "psk": "key0", 00:23:56.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.000 "hdgst": false, 00:23:56.000 "ddgst": false 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_nvme_set_hotplug", 00:23:56.000 "params": { 00:23:56.000 "period_us": 100000, 00:23:56.000 "enable": false 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_enable_histogram", 00:23:56.000 "params": { 00:23:56.000 "name": "nvme0n1", 00:23:56.000 "enable": true 00:23:56.000 } 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "method": "bdev_wait_for_examine" 00:23:56.000 } 00:23:56.000 ] 00:23:56.000 }, 00:23:56.000 { 00:23:56.000 "subsystem": "nbd", 00:23:56.000 "config": [] 00:23:56.000 } 00:23:56.000 ] 00:23:56.000 }' 00:23:56.000 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.000 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.000 03:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.000 [2024-07-23 03:23:22.499018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:56.000 [2024-07-23 03:23:22.499093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483595 ] 00:23:56.000 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.000 [2024-07-23 03:23:22.561338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.259 [2024-07-23 03:23:22.650577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.259 [2024-07-23 03:23:22.829558] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.193 03:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.450 Running I/O for 1 seconds... 
00:23:58.413 00:23:58.413 Latency(us) 00:23:58.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.413 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.413 Verification LBA range: start 0x0 length 0x2000 00:23:58.413 nvme0n1 : 1.06 2122.01 8.29 0.00 0.00 58935.28 7912.87 86604.61 00:23:58.413 =================================================================================================================== 00:23:58.413 Total : 2122.01 8.29 0.00 0.00 58935.28 7912.87 86604.61 00:23:58.413 0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:58.413 nvmf_trace.0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 483595 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 483595 ']' 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 483595 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:58.413 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 483595 00:23:58.672 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:58.672 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:58.672 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 483595' 00:23:58.672 killing process with pid 483595 00:23:58.672 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 483595 00:23:58.672 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.672 00:23:58.672 Latency(us) 00:23:58.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.672 =================================================================================================================== 00:23:58.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.672 03:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 483595 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:58.672 
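The cleanup path that begins here (cleanup → process_shm → killprocess → nvmftestfini) archives the shared-memory trace file, stops bdevperf and the target, and finally unloads the kernel NVMe/TCP initiator modules. A condensed sketch of those steps, with the archive written to the current directory rather than the autotest output tree:

  # archive the SPDK trace shm file for offline analysis
  tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
  # stop the bdevperf initiator and the nvmf target started earlier
  kill "$bdevperf_pid" && wait "$bdevperf_pid"
  kill "$nvmfpid"      && wait "$nvmfpid"
  # unload the kernel initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics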
03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.672 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.672 rmmod nvme_tcp 00:23:58.931 rmmod nvme_fabrics 00:23:58.931 rmmod nvme_keyring 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 483459 ']' 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 483459 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 483459 ']' 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 483459 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 483459 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 483459' 00:23:58.931 killing process with pid 483459 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 483459 00:23:58.931 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 483459 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.191 03:23:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.096 03:23:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.096 03:23:27 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RUANOxMY3K /tmp/tmp.Inx2iN57bI /tmp/tmp.okA4zgB3py 00:24:01.096 00:24:01.096 real 1m19.631s 00:24:01.096 user 1m59.573s 00:24:01.096 sys 0m27.846s 00:24:01.096 03:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:01.096 03:23:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.096 ************************************ 00:24:01.096 END TEST nvmf_tls 00:24:01.096 ************************************ 00:24:01.096 03:23:27 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:01.096 03:23:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:01.096 03:23:27 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:24:01.096 03:23:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.096 ************************************ 00:24:01.096 START TEST nvmf_fips 00:24:01.096 ************************************ 00:24:01.096 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:01.356 * Looking for test storage... 00:24:01.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:01.356 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:01.357 Error setting digest 00:24:01.357 00A242A4347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:01.357 00A242A4347F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.357 03:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.260 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.261 
03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:03.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:03.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:03.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:03.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.261 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:03.520 00:24:03.520 --- 10.0.0.2 ping statistics --- 00:24:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.520 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:24:03.520 00:24:03.520 --- 10.0.0.1 ping statistics --- 00:24:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.520 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=485948 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 485948 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 485948 ']' 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.520 03:23:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.520 [2024-07-23 03:23:30.002171] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:03.520 [2024-07-23 03:23:30.002300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.520 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.520 [2024-07-23 03:23:30.074473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.779 [2024-07-23 03:23:30.162975] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.779 [2024-07-23 03:23:30.163028] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:03.779 [2024-07-23 03:23:30.163043] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.779 [2024-07-23 03:23:30.163055] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.779 [2024-07-23 03:23:30.163066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.779 [2024-07-23 03:23:30.163091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.779 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.038 [2024-07-23 03:23:30.534362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.038 [2024-07-23 03:23:30.550385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.038 [2024-07-23 03:23:30.550624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.038 [2024-07-23 03:23:30.582498] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:04.038 malloc0 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=486059 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 486059 /var/tmp/bdevperf.sock 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 486059 ']' 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:04.038 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.297 [2024-07-23 03:23:30.672646] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:04.297 [2024-07-23 03:23:30.672731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486059 ] 00:24:04.297 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.297 [2024-07-23 03:23:30.729650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.297 [2024-07-23 03:23:30.812626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.556 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:04.556 03:23:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:24:04.556 03:23:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:04.814 [2024-07-23 03:23:31.188895] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.814 [2024-07-23 03:23:31.189015] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:04.814 TLSTESTn1 00:24:04.814 03:23:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.814 Running I/O for 10 seconds... 
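For reference, the TLS attach sequence traced above reduces to the sketch below. Every path, NQN, address and the PSK value are taken verbatim from this run; the rpc.py and bdevperf.py invocations are shortened to their script names and relevant arguments, and the redirection into key.txt is implied by the chmod that follows it in the trace rather than shown explicitly.

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key.txt
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Both the host-side --psk option and the target-side PSK path are reported as deprecated features scheduled for removal in v24.09, which matches the deprecation warnings recorded in this log.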
00:24:17.014 00:24:17.014 Latency(us) 00:24:17.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.014 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.014 Verification LBA range: start 0x0 length 0x2000 00:24:17.014 TLSTESTn1 : 10.06 2144.95 8.38 0.00 0.00 59510.23 10000.31 88158.06 00:24:17.014 =================================================================================================================== 00:24:17.014 Total : 2144.95 8.38 0.00 0.00 59510.23 10000.31 88158.06 00:24:17.014 0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:17.014 nvmf_trace.0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 486059 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 486059 ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 486059 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 486059 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 486059' 00:24:17.014 killing process with pid 486059 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 486059 00:24:17.014 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.014 00:24:17.014 Latency(us) 00:24:17.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.014 =================================================================================================================== 00:24:17.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.014 [2024-07-23 03:23:41.556043] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 486059 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.014 rmmod nvme_tcp 00:24:17.014 rmmod nvme_fabrics 00:24:17.014 rmmod nvme_keyring 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 485948 ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 485948 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 485948 ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 485948 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 485948 00:24:17.014 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:17.015 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:17.015 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 485948' 00:24:17.015 killing process with pid 485948 00:24:17.015 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 485948 00:24:17.015 [2024-07-23 03:23:41.843883] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:17.015 03:23:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 485948 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.015 03:23:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.583 03:23:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.583 03:23:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:17.583 00:24:17.583 real 0m16.474s 00:24:17.583 user 0m18.043s 00:24:17.583 sys 0m7.103s 00:24:17.583 03:23:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:17.583 03:23:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 ************************************ 00:24:17.583 END TEST nvmf_fips 00:24:17.583 
************************************ 00:24:17.583 03:23:44 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:17.583 03:23:44 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:17.842 03:23:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:17.842 03:23:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:17.842 03:23:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.842 ************************************ 00:24:17.842 START TEST nvmf_fuzz 00:24:17.842 ************************************ 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:17.842 * Looking for test storage... 00:24:17.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.842 03:23:44 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.842 03:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:19.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:19.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:19.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:19.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.747 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:20.006 00:24:20.006 --- 10.0.0.2 ping statistics --- 00:24:20.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.006 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:24:20.006 00:24:20.006 --- 10.0.0.1 ping statistics --- 00:24:20.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.006 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=489228 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 489228 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 489228 ']' 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
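A condensed sketch of the namespace bring-up traced above, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses reported for this run: the target-side port is moved into a private network namespace so the initiator port left on the host reaches it over a real NVMe/TCP path, and the target application is then launched inside that namespace.

  # target port goes into its own namespace; initiator port stays on the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on the default port and verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the SPDK target inside the namespace (paths/core mask as in this trace)
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &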
00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:20.006 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.265 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.524 Malloc0 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:20.524 03:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:52.623 Fuzzing completed. 
Shutting down the fuzz application 00:24:52.623 00:24:52.623 Dumping successful admin opcodes: 00:24:52.623 8, 9, 10, 24, 00:24:52.623 Dumping successful io opcodes: 00:24:52.623 0, 9, 00:24:52.623 NS: 0x200003aeff00 I/O qp, Total commands completed: 445909, total successful commands: 2588, random_seed: 456906112 00:24:52.623 NS: 0x200003aeff00 admin qp, Total commands completed: 55056, total successful commands: 440, random_seed: 1168277440 00:24:52.623 03:24:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:52.623 Fuzzing completed. Shutting down the fuzz application 00:24:52.623 00:24:52.623 Dumping successful admin opcodes: 00:24:52.623 24, 00:24:52.623 Dumping successful io opcodes: 00:24:52.623 00:24:52.623 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1282454235 00:24:52.623 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1282600492 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.623 03:24:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.623 rmmod nvme_tcp 00:24:52.623 rmmod nvme_fabrics 00:24:52.623 rmmod nvme_keyring 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 489228 ']' 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 489228 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 489228 ']' 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 489228 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 489228 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:52.623 
03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 489228' 00:24:52.623 killing process with pid 489228 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 489228 00:24:52.623 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 489228 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.882 03:24:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.415 03:24:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.415 03:24:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:55.415 00:24:55.415 real 0m37.206s 00:24:55.415 user 0m50.944s 00:24:55.415 sys 0m15.506s 00:24:55.415 03:24:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:55.415 03:24:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.415 ************************************ 00:24:55.415 END TEST nvmf_fuzz 00:24:55.415 ************************************ 00:24:55.415 03:24:21 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:55.415 03:24:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:55.415 03:24:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:55.415 03:24:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:55.415 ************************************ 00:24:55.415 START TEST nvmf_multiconnection 00:24:55.415 ************************************ 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:55.415 * Looking for test storage... 
00:24:55.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.415 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.416 03:24:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.318 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.319 03:24:23 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:57.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:57.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:57.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:57.319 03:24:23 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:57.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:24:57.319 00:24:57.319 --- 10.0.0.2 ping statistics --- 00:24:57.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.319 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:57.319 00:24:57.319 --- 10.0.0.1 ping statistics --- 00:24:57.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.319 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=495571 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 495571 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 495571 ']' 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
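The multiconnection run that follows repeats one RPC sequence per subsystem (Malloc1 through Malloc11) and then connects to each subsystem from the host side. Condensed into a loop, and assuming the same rpc_cmd wrapper, host NQN/ID and 10.0.0.2:4420 listener used by this trace, the sequence looks roughly like:

  # stand up the transport once, then one bdev/subsystem/listener per index
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  # connect to each subsystem from the initiator and wait for its namespace to appear
  for i in $(seq 1 11); do
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # waitforserial in the trace polls: lsblk -l -o NAME,SERIAL | grep -c SPDK$i
  done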
00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:57.319 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.319 [2024-07-23 03:24:23.655345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:57.319 [2024-07-23 03:24:23.655428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.319 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.319 [2024-07-23 03:24:23.725445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.319 [2024-07-23 03:24:23.818921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.319 [2024-07-23 03:24:23.819004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.320 [2024-07-23 03:24:23.819030] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.320 [2024-07-23 03:24:23.819043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.320 [2024-07-23 03:24:23.819054] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.320 [2024-07-23 03:24:23.822638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.320 [2024-07-23 03:24:23.822690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.320 [2024-07-23 03:24:23.822788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.320 [2024-07-23 03:24:23.822791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 [2024-07-23 03:24:23.956176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 Malloc1 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 [2024-07-23 03:24:24.011190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 Malloc2 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.578 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 Malloc3 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 Malloc4 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.579 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 Malloc5 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 Malloc6 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 03:24:24 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 Malloc7 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 Malloc8 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 Malloc9 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.839 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 Malloc10 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 Malloc11 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.098 03:24:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:58.664 03:24:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:58.664 03:24:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:58.664 03:24:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.664 03:24:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:58.664 03:24:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.563 03:24:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:01.497 03:24:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:01.497 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.497 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.497 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:01.497 03:24:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.395 
03:24:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:03.395 03:24:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.396 03:24:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:03.962 03:24:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:03.962 03:24:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:03.962 03:24:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.962 03:24:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.962 03:24:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.490 03:24:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:06.748 03:24:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:06.748 03:24:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:06.748 03:24:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.748 03:24:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:06.748 03:24:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.645 03:24:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:09.578 03:24:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:09.578 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.578 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.578 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.578 03:24:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.476 03:24:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:12.439 03:24:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:12.439 03:24:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:12.439 03:24:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.439 03:24:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:12.439 03:24:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.335 03:24:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:15.268 03:24:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:15.268 03:24:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:15.268 03:24:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.268 03:24:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:15.268 03:24:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.164 03:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:18.097 03:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:18.097 03:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:18.097 03:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.097 03:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:18.097 03:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.625 03:24:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:20.883 03:24:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:20.883 03:24:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:20.883 03:24:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.883 03:24:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
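The host side then connects to each subsystem with nvme-cli and polls until the corresponding block device appears, matching it by the serial number assigned at subsystem creation. A minimal sketch of that connect-and-wait pattern is shown below, reusing the host NQN/ID and the 2-second, roughly 15-retry cadence visible in the trace; the loop wrapper and error handling are illustrative, not the harness's actual waitforserial implementation.

  for i in $(seq 1 11); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n "nqn.2016-06.io.spdk:cnode${i}" \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
      # poll until a block device whose serial matches SPDK$i is visible
      tries=0
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}")" -ge 1 ]; do
          tries=$((tries + 1))
          [ "${tries}" -gt 15 ] && { echo "device with serial SPDK${i} never appeared" >&2; exit 1; }
          sleep 2
      done
  done
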
00:25:20.883 03:24:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.407 03:24:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:23.973 03:24:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:23.973 03:24:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:23.973 03:24:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.973 03:24:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:23.973 03:24:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.869 03:24:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:26.801 03:24:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:26.801 03:24:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:26.801 03:24:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.801 03:24:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:26.801 03:24:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:29.324 03:24:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:29.324 [global] 00:25:29.324 thread=1 00:25:29.324 invalidate=1 00:25:29.324 rw=read 00:25:29.324 time_based=1 00:25:29.324 runtime=10 00:25:29.324 ioengine=libaio 00:25:29.324 direct=1 00:25:29.324 bs=262144 00:25:29.324 iodepth=64 00:25:29.324 norandommap=1 00:25:29.324 numjobs=1 00:25:29.324 00:25:29.324 [job0] 00:25:29.324 filename=/dev/nvme0n1 00:25:29.324 [job1] 00:25:29.324 filename=/dev/nvme10n1 00:25:29.324 [job2] 00:25:29.324 filename=/dev/nvme1n1 00:25:29.324 [job3] 00:25:29.324 filename=/dev/nvme2n1 00:25:29.324 [job4] 00:25:29.324 filename=/dev/nvme3n1 00:25:29.324 [job5] 00:25:29.324 filename=/dev/nvme4n1 00:25:29.324 [job6] 00:25:29.324 filename=/dev/nvme5n1 00:25:29.324 [job7] 00:25:29.324 filename=/dev/nvme6n1 00:25:29.324 [job8] 00:25:29.324 filename=/dev/nvme7n1 00:25:29.324 [job9] 00:25:29.324 filename=/dev/nvme8n1 00:25:29.324 [job10] 00:25:29.324 filename=/dev/nvme9n1 00:25:29.324 Could not set queue depth (nvme0n1) 00:25:29.324 Could not set queue depth (nvme10n1) 00:25:29.324 Could not set queue depth (nvme1n1) 00:25:29.324 Could not set queue depth (nvme2n1) 00:25:29.324 Could not set queue depth (nvme3n1) 00:25:29.324 Could not set queue depth (nvme4n1) 00:25:29.324 Could not set queue depth (nvme5n1) 00:25:29.324 Could not set queue depth (nvme6n1) 00:25:29.324 Could not set queue depth (nvme7n1) 00:25:29.324 Could not set queue depth (nvme8n1) 00:25:29.324 Could not set queue depth (nvme9n1) 00:25:29.324 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.324 fio-3.35 00:25:29.324 Starting 11 threads 00:25:41.567 00:25:41.567 job0: 
(groupid=0, jobs=1): err= 0: pid=499832: Tue Jul 23 03:25:06 2024 00:25:41.567 read: IOPS=801, BW=200MiB/s (210MB/s)(2020MiB/10083msec) 00:25:41.567 slat (usec): min=8, max=89519, avg=811.46, stdev=3292.36 00:25:41.567 clat (msec): min=2, max=244, avg=78.96, stdev=39.65 00:25:41.567 lat (msec): min=2, max=258, avg=79.78, stdev=40.15 00:25:41.567 clat percentiles (msec): 00:25:41.567 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 44], 00:25:41.567 | 30.00th=[ 55], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 88], 00:25:41.567 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 134], 95.00th=[ 150], 00:25:41.567 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 213], 99.95th=[ 218], 00:25:41.567 | 99.99th=[ 245] 00:25:41.567 bw ( KiB/s): min=101888, max=363816, per=11.20%, avg=205162.45, stdev=66465.27, samples=20 00:25:41.567 iops : min= 398, max= 1421, avg=801.35, stdev=259.64, samples=20 00:25:41.567 lat (msec) : 4=0.33%, 10=1.26%, 20=3.82%, 50=20.69%, 100=47.87% 00:25:41.567 lat (msec) : 250=26.02% 00:25:41.567 cpu : usr=0.33%, sys=2.19%, ctx=2297, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=8081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job1: (groupid=0, jobs=1): err= 0: pid=499835: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=729, BW=182MiB/s (191MB/s)(1839MiB/10084msec) 00:25:41.568 slat (usec): min=9, max=91402, avg=920.56, stdev=3715.72 00:25:41.568 clat (msec): min=2, max=214, avg=86.74, stdev=40.32 00:25:41.568 lat (msec): min=2, max=214, avg=87.66, stdev=40.78 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 52], 00:25:41.568 | 30.00th=[ 62], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 95], 00:25:41.568 | 70.00th=[ 106], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 159], 00:25:41.568 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 209], 00:25:41.568 | 99.99th=[ 215] 00:25:41.568 bw ( KiB/s): min=107520, max=345421, per=10.19%, avg=186637.10, stdev=62358.27, samples=20 00:25:41.568 iops : min= 420, max= 1349, avg=728.95, stdev=243.57, samples=20 00:25:41.568 lat (msec) : 4=0.08%, 10=2.05%, 20=1.85%, 50=15.22%, 100=45.51% 00:25:41.568 lat (msec) : 250=35.29% 00:25:41.568 cpu : usr=0.32%, sys=1.88%, ctx=2141, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=7357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job2: (groupid=0, jobs=1): err= 0: pid=499836: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=645, BW=161MiB/s (169MB/s)(1622MiB/10047msec) 00:25:41.568 slat (usec): min=9, max=218393, avg=1022.22, stdev=6424.03 00:25:41.568 clat (usec): min=1400, max=441164, avg=98011.78, stdev=53853.00 00:25:41.568 lat (usec): min=1444, max=442344, avg=99034.01, stdev=54705.53 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 45], 20.00th=[ 57], 00:25:41.568 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 99], 00:25:41.568 | 
70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 169], 95.00th=[ 209], 00:25:41.568 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 296], 99.95th=[ 355], 00:25:41.568 | 99.99th=[ 443] 00:25:41.568 bw ( KiB/s): min=80545, max=295424, per=8.98%, avg=164447.80, stdev=60715.20, samples=20 00:25:41.568 iops : min= 314, max= 1154, avg=642.25, stdev=237.21, samples=20 00:25:41.568 lat (msec) : 2=0.05%, 4=0.17%, 10=0.60%, 20=1.37%, 50=11.16% 00:25:41.568 lat (msec) : 100=47.68%, 250=36.79%, 500=2.19% 00:25:41.568 cpu : usr=0.32%, sys=2.14%, ctx=1842, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=6489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job3: (groupid=0, jobs=1): err= 0: pid=499838: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=813, BW=203MiB/s (213MB/s)(2051MiB/10088msec) 00:25:41.568 slat (usec): min=9, max=243965, avg=1094.01, stdev=4050.76 00:25:41.568 clat (msec): min=6, max=355, avg=77.53, stdev=42.69 00:25:41.568 lat (msec): min=6, max=355, avg=78.63, stdev=43.08 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 43], 00:25:41.568 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 81], 00:25:41.568 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 128], 95.00th=[ 150], 00:25:41.568 | 99.00th=[ 205], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 351], 00:25:41.568 | 99.99th=[ 355] 00:25:41.568 bw ( KiB/s): min=112640, max=401920, per=11.37%, avg=208350.35, stdev=87519.92, samples=20 00:25:41.568 iops : min= 440, max= 1570, avg=813.80, stdev=341.87, samples=20 00:25:41.568 lat (msec) : 10=0.01%, 20=0.11%, 50=30.18%, 100=45.50%, 250=23.44% 00:25:41.568 lat (msec) : 500=0.77% 00:25:41.568 cpu : usr=0.59%, sys=2.70%, ctx=1891, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=8205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job4: (groupid=0, jobs=1): err= 0: pid=499839: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=461, BW=115MiB/s (121MB/s)(1163MiB/10087msec) 00:25:41.568 slat (usec): min=9, max=755810, avg=1764.22, stdev=14308.17 00:25:41.568 clat (msec): min=2, max=813, avg=136.87, stdev=121.33 00:25:41.568 lat (msec): min=2, max=994, avg=138.63, stdev=122.48 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 19], 5.00th=[ 43], 10.00th=[ 59], 20.00th=[ 72], 00:25:41.568 | 30.00th=[ 80], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 111], 00:25:41.568 | 70.00th=[ 133], 80.00th=[ 171], 90.00th=[ 264], 95.00th=[ 334], 00:25:41.568 | 99.00th=[ 785], 99.50th=[ 793], 99.90th=[ 802], 99.95th=[ 810], 00:25:41.568 | 99.99th=[ 818] 00:25:41.568 bw ( KiB/s): min=32191, max=237568, per=6.41%, avg=117456.40, stdev=67017.67, samples=20 00:25:41.568 iops : min= 125, max= 928, avg=458.70, stdev=261.86, samples=20 00:25:41.568 lat (msec) : 4=0.04%, 10=0.09%, 20=1.16%, 50=5.78%, 100=43.74% 00:25:41.568 lat (msec) : 250=38.51%, 500=8.17%, 750=1.16%, 1000=1.35% 00:25:41.568 cpu : usr=0.26%, 
sys=1.53%, ctx=1063, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=4653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job5: (groupid=0, jobs=1): err= 0: pid=499840: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=578, BW=145MiB/s (152MB/s)(1458MiB/10084msec) 00:25:41.568 slat (usec): min=10, max=118610, avg=1321.11, stdev=4848.12 00:25:41.568 clat (msec): min=3, max=300, avg=109.28, stdev=52.69 00:25:41.568 lat (msec): min=3, max=300, avg=110.61, stdev=53.53 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 19], 5.00th=[ 40], 10.00th=[ 56], 20.00th=[ 70], 00:25:41.568 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 100], 60.00th=[ 111], 00:25:41.568 | 70.00th=[ 123], 80.00th=[ 142], 90.00th=[ 188], 95.00th=[ 222], 00:25:41.568 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 300], 00:25:41.568 | 99.99th=[ 300] 00:25:41.568 bw ( KiB/s): min=74091, max=252416, per=8.06%, avg=147579.65, stdev=52799.30, samples=20 00:25:41.568 iops : min= 289, max= 986, avg=576.40, stdev=206.25, samples=20 00:25:41.568 lat (msec) : 4=0.07%, 10=0.17%, 20=0.99%, 50=6.57%, 100=42.61% 00:25:41.568 lat (msec) : 250=47.70%, 500=1.89% 00:25:41.568 cpu : usr=0.23%, sys=2.11%, ctx=1651, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job6: (groupid=0, jobs=1): err= 0: pid=499841: Tue Jul 23 03:25:06 2024 00:25:41.568 read: IOPS=580, BW=145MiB/s (152MB/s)(1454MiB/10024msec) 00:25:41.568 slat (usec): min=8, max=443508, avg=1186.87, stdev=10298.56 00:25:41.568 clat (msec): min=2, max=851, avg=109.04, stdev=112.98 00:25:41.568 lat (msec): min=2, max=861, avg=110.23, stdev=114.16 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 53], 00:25:41.568 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 94], 00:25:41.568 | 70.00th=[ 110], 80.00th=[ 126], 90.00th=[ 157], 95.00th=[ 271], 00:25:41.568 | 99.00th=[ 693], 99.50th=[ 776], 99.90th=[ 835], 99.95th=[ 852], 00:25:41.568 | 99.99th=[ 852] 00:25:41.568 bw ( KiB/s): min=30147, max=281600, per=8.04%, avg=147223.10, stdev=73743.14, samples=20 00:25:41.568 iops : min= 117, max= 1100, avg=575.00, stdev=288.16, samples=20 00:25:41.568 lat (msec) : 4=0.03%, 10=1.26%, 20=3.23%, 50=14.05%, 100=46.10% 00:25:41.568 lat (msec) : 250=29.90%, 500=2.25%, 750=2.65%, 1000=0.53% 00:25:41.568 cpu : usr=0.31%, sys=1.84%, ctx=1564, majf=0, minf=4097 00:25:41.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:41.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.568 issued rwts: total=5816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.568 job7: (groupid=0, jobs=1): err= 0: pid=499842: Tue Jul 
23 03:25:06 2024 00:25:41.568 read: IOPS=623, BW=156MiB/s (163MB/s)(1566MiB/10046msec) 00:25:41.568 slat (usec): min=8, max=244267, avg=965.70, stdev=6057.14 00:25:41.568 clat (msec): min=5, max=614, avg=101.61, stdev=63.79 00:25:41.568 lat (msec): min=5, max=614, avg=102.57, stdev=64.61 00:25:41.568 clat percentiles (msec): 00:25:41.568 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 58], 00:25:41.568 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 90], 60.00th=[ 104], 00:25:41.568 | 70.00th=[ 118], 80.00th=[ 134], 90.00th=[ 165], 95.00th=[ 207], 00:25:41.568 | 99.00th=[ 368], 99.50th=[ 451], 99.90th=[ 617], 99.95th=[ 617], 00:25:41.568 | 99.99th=[ 617] 00:25:41.568 bw ( KiB/s): min=29184, max=287232, per=8.66%, avg=158684.30, stdev=56082.85, samples=20 00:25:41.568 iops : min= 114, max= 1122, avg=619.80, stdev=219.07, samples=20 00:25:41.568 lat (msec) : 10=0.94%, 20=1.69%, 50=9.42%, 100=45.35%, 250=40.63% 00:25:41.568 lat (msec) : 500=1.56%, 750=0.40% 00:25:41.569 cpu : usr=0.30%, sys=1.75%, ctx=1958, majf=0, minf=4097 00:25:41.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:41.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.569 issued rwts: total=6264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.569 job8: (groupid=0, jobs=1): err= 0: pid=499843: Tue Jul 23 03:25:06 2024 00:25:41.569 read: IOPS=781, BW=195MiB/s (205MB/s)(1962MiB/10047msec) 00:25:41.569 slat (usec): min=9, max=119686, avg=894.23, stdev=3617.37 00:25:41.569 clat (msec): min=2, max=272, avg=80.98, stdev=46.56 00:25:41.569 lat (msec): min=2, max=332, avg=81.87, stdev=47.02 00:25:41.569 clat percentiles (msec): 00:25:41.569 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 31], 20.00th=[ 42], 00:25:41.569 | 30.00th=[ 54], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 88], 00:25:41.569 | 70.00th=[ 97], 80.00th=[ 111], 90.00th=[ 138], 95.00th=[ 174], 00:25:41.569 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 266], 99.95th=[ 268], 00:25:41.569 | 99.99th=[ 271] 00:25:41.569 bw ( KiB/s): min=81408, max=334336, per=10.88%, avg=199215.50, stdev=56759.35, samples=20 00:25:41.569 iops : min= 318, max= 1306, avg=778.05, stdev=221.69, samples=20 00:25:41.569 lat (msec) : 4=0.29%, 10=3.67%, 20=3.64%, 50=19.99%, 100=44.80% 00:25:41.569 lat (msec) : 250=27.40%, 500=0.20% 00:25:41.569 cpu : usr=0.33%, sys=2.57%, ctx=2163, majf=0, minf=3721 00:25:41.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:41.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.569 issued rwts: total=7848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.569 job9: (groupid=0, jobs=1): err= 0: pid=499844: Tue Jul 23 03:25:06 2024 00:25:41.569 read: IOPS=602, BW=151MiB/s (158MB/s)(1518MiB/10079msec) 00:25:41.569 slat (usec): min=13, max=146484, avg=1460.53, stdev=5857.10 00:25:41.569 clat (msec): min=4, max=313, avg=104.68, stdev=56.17 00:25:41.569 lat (msec): min=4, max=338, avg=106.14, stdev=57.11 00:25:41.569 clat percentiles (msec): 00:25:41.569 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 59], 00:25:41.569 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 106], 00:25:41.569 | 70.00th=[ 126], 80.00th=[ 148], 90.00th=[ 
188], 95.00th=[ 224], 00:25:41.569 | 99.00th=[ 262], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:25:41.569 | 99.99th=[ 313] 00:25:41.569 bw ( KiB/s): min=61828, max=360239, per=8.39%, avg=153714.60, stdev=74222.07, samples=20 00:25:41.569 iops : min= 241, max= 1407, avg=600.30, stdev=289.93, samples=20 00:25:41.569 lat (msec) : 10=0.86%, 20=1.35%, 50=12.65%, 100=41.91%, 250=41.30% 00:25:41.569 lat (msec) : 500=1.93% 00:25:41.569 cpu : usr=0.45%, sys=2.17%, ctx=1559, majf=0, minf=4097 00:25:41.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:41.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.569 issued rwts: total=6072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.569 job10: (groupid=0, jobs=1): err= 0: pid=499845: Tue Jul 23 03:25:06 2024 00:25:41.569 read: IOPS=551, BW=138MiB/s (145MB/s)(1391MiB/10086msec) 00:25:41.569 slat (usec): min=8, max=256447, avg=1286.98, stdev=8518.45 00:25:41.569 clat (usec): min=1266, max=768013, avg=114654.50, stdev=102147.95 00:25:41.569 lat (usec): min=1322, max=768030, avg=115941.48, stdev=103137.02 00:25:41.569 clat percentiles (msec): 00:25:41.569 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 53], 00:25:41.569 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 100], 00:25:41.569 | 70.00th=[ 116], 80.00th=[ 146], 90.00th=[ 207], 95.00th=[ 359], 00:25:41.569 | 99.00th=[ 535], 99.50th=[ 735], 99.90th=[ 760], 99.95th=[ 768], 00:25:41.569 | 99.99th=[ 768] 00:25:41.569 bw ( KiB/s): min=45988, max=283136, per=7.68%, avg=140758.80, stdev=71868.72, samples=20 00:25:41.569 iops : min= 179, max= 1106, avg=549.75, stdev=280.84, samples=20 00:25:41.569 lat (msec) : 2=0.05%, 4=0.74%, 10=1.17%, 20=3.25%, 50=13.73% 00:25:41.569 lat (msec) : 100=41.93%, 250=32.67%, 500=5.37%, 750=0.92%, 1000=0.16% 00:25:41.569 cpu : usr=0.23%, sys=1.85%, ctx=1490, majf=0, minf=4097 00:25:41.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:41.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.569 issued rwts: total=5564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.569 00:25:41.569 Run status group 0 (all jobs): 00:25:41.569 READ: bw=1789MiB/s (1876MB/s), 115MiB/s-203MiB/s (121MB/s-213MB/s), io=17.6GiB (18.9GB), run=10024-10088msec 00:25:41.569 00:25:41.569 Disk stats (read/write): 00:25:41.569 nvme0n1: ios=16116/0, merge=0/0, ticks=1252752/0, in_queue=1252752, util=96.21% 00:25:41.569 nvme10n1: ios=14644/0, merge=0/0, ticks=1249258/0, in_queue=1249258, util=96.60% 00:25:41.569 nvme1n1: ios=12413/0, merge=0/0, ticks=1219993/0, in_queue=1219993, util=97.03% 00:25:41.569 nvme2n1: ios=16337/0, merge=0/0, ticks=1243315/0, in_queue=1243315, util=97.40% 00:25:41.569 nvme3n1: ios=9228/0, merge=0/0, ticks=1247980/0, in_queue=1247980, util=97.55% 00:25:41.569 nvme4n1: ios=11631/0, merge=0/0, ticks=1245797/0, in_queue=1245797, util=98.19% 00:25:41.569 nvme5n1: ios=11143/0, merge=0/0, ticks=1226491/0, in_queue=1226491, util=98.39% 00:25:41.569 nvme6n1: ios=12154/0, merge=0/0, ticks=1219588/0, in_queue=1219588, util=98.50% 00:25:41.569 nvme7n1: ios=15104/0, merge=0/0, ticks=1221344/0, in_queue=1221344, util=98.90% 00:25:41.569 
nvme8n1: ios=12099/0, merge=0/0, ticks=1243744/0, in_queue=1243744, util=99.10% 00:25:41.569 nvme9n1: ios=11066/0, merge=0/0, ticks=1250608/0, in_queue=1250608, util=99.21% 00:25:41.569 03:25:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:41.569 [global] 00:25:41.569 thread=1 00:25:41.569 invalidate=1 00:25:41.569 rw=randwrite 00:25:41.569 time_based=1 00:25:41.569 runtime=10 00:25:41.569 ioengine=libaio 00:25:41.569 direct=1 00:25:41.569 bs=262144 00:25:41.569 iodepth=64 00:25:41.569 norandommap=1 00:25:41.569 numjobs=1 00:25:41.569 00:25:41.569 [job0] 00:25:41.569 filename=/dev/nvme0n1 00:25:41.569 [job1] 00:25:41.569 filename=/dev/nvme10n1 00:25:41.569 [job2] 00:25:41.569 filename=/dev/nvme1n1 00:25:41.569 [job3] 00:25:41.569 filename=/dev/nvme2n1 00:25:41.569 [job4] 00:25:41.569 filename=/dev/nvme3n1 00:25:41.569 [job5] 00:25:41.569 filename=/dev/nvme4n1 00:25:41.569 [job6] 00:25:41.569 filename=/dev/nvme5n1 00:25:41.569 [job7] 00:25:41.569 filename=/dev/nvme6n1 00:25:41.569 [job8] 00:25:41.569 filename=/dev/nvme7n1 00:25:41.569 [job9] 00:25:41.569 filename=/dev/nvme8n1 00:25:41.569 [job10] 00:25:41.569 filename=/dev/nvme9n1 00:25:41.569 Could not set queue depth (nvme0n1) 00:25:41.569 Could not set queue depth (nvme10n1) 00:25:41.569 Could not set queue depth (nvme1n1) 00:25:41.569 Could not set queue depth (nvme2n1) 00:25:41.569 Could not set queue depth (nvme3n1) 00:25:41.569 Could not set queue depth (nvme4n1) 00:25:41.569 Could not set queue depth (nvme5n1) 00:25:41.569 Could not set queue depth (nvme6n1) 00:25:41.569 Could not set queue depth (nvme7n1) 00:25:41.569 Could not set queue depth (nvme8n1) 00:25:41.569 Could not set queue depth (nvme9n1) 00:25:41.569 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.569 fio-3.35 00:25:41.569 Starting 11 threads 00:25:51.587 00:25:51.587 job0: (groupid=0, jobs=1): err= 0: pid=500884: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=447, BW=112MiB/s (117MB/s)(1140MiB/10181msec); 0 zone resets 00:25:51.587 slat (usec): min=17, max=155353, avg=1650.89, stdev=5138.03 
00:25:51.587 clat (usec): min=1614, max=535547, avg=141199.77, stdev=84397.36 00:25:51.587 lat (msec): min=2, max=540, avg=142.85, stdev=85.54 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 65], 00:25:51.587 | 30.00th=[ 77], 40.00th=[ 108], 50.00th=[ 138], 60.00th=[ 159], 00:25:51.587 | 70.00th=[ 180], 80.00th=[ 207], 90.00th=[ 253], 95.00th=[ 284], 00:25:51.587 | 99.00th=[ 422], 99.50th=[ 439], 99.90th=[ 518], 99.95th=[ 527], 00:25:51.587 | 99.99th=[ 535] 00:25:51.587 bw ( KiB/s): min=27190, max=246272, per=9.19%, avg=115043.05, stdev=55946.10, samples=20 00:25:51.587 iops : min= 106, max= 962, avg=449.30, stdev=218.56, samples=20 00:25:51.587 lat (msec) : 2=0.02%, 4=0.15%, 10=0.59%, 20=2.17%, 50=7.63% 00:25:51.587 lat (msec) : 100=27.37%, 250=51.66%, 500=10.20%, 750=0.20% 00:25:51.587 cpu : usr=1.50%, sys=1.78%, ctx=2424, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.587 issued rwts: total=0,4559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.587 job1: (groupid=0, jobs=1): err= 0: pid=500891: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=488, BW=122MiB/s (128MB/s)(1227MiB/10055msec); 0 zone resets 00:25:51.587 slat (usec): min=20, max=178163, avg=1195.78, stdev=5402.70 00:25:51.587 clat (usec): min=1505, max=562416, avg=129874.88, stdev=89900.37 00:25:51.587 lat (usec): min=1579, max=562475, avg=131070.66, stdev=90892.12 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 47], 00:25:51.587 | 30.00th=[ 66], 40.00th=[ 96], 50.00th=[ 131], 60.00th=[ 146], 00:25:51.587 | 70.00th=[ 165], 80.00th=[ 199], 90.00th=[ 241], 95.00th=[ 288], 00:25:51.587 | 99.00th=[ 418], 99.50th=[ 439], 99.90th=[ 456], 99.95th=[ 567], 00:25:51.587 | 99.99th=[ 567] 00:25:51.587 bw ( KiB/s): min=30658, max=284672, per=9.91%, avg=124006.50, stdev=53216.43, samples=20 00:25:51.587 iops : min= 119, max= 1112, avg=484.35, stdev=207.95, samples=20 00:25:51.587 lat (msec) : 2=0.10%, 4=0.77%, 10=3.61%, 20=3.91%, 50=13.61% 00:25:51.587 lat (msec) : 100=19.25%, 250=49.88%, 500=8.80%, 750=0.06% 00:25:51.587 cpu : usr=1.35%, sys=1.64%, ctx=3188, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.587 issued rwts: total=0,4908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.587 job2: (groupid=0, jobs=1): err= 0: pid=500924: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=435, BW=109MiB/s (114MB/s)(1107MiB/10161msec); 0 zone resets 00:25:51.587 slat (usec): min=23, max=264696, avg=1212.87, stdev=5796.83 00:25:51.587 clat (msec): min=2, max=806, avg=145.56, stdev=118.09 00:25:51.587 lat (msec): min=2, max=806, avg=146.77, stdev=118.65 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 62], 00:25:51.587 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 107], 60.00th=[ 146], 00:25:51.587 | 70.00th=[ 176], 80.00th=[ 207], 90.00th=[ 271], 95.00th=[ 355], 00:25:51.587 | 99.00th=[ 
743], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 810], 00:25:51.587 | 99.99th=[ 810] 00:25:51.587 bw ( KiB/s): min=24625, max=194560, per=8.92%, avg=111658.55, stdev=50775.72, samples=20 00:25:51.587 iops : min= 96, max= 760, avg=436.05, stdev=198.33, samples=20 00:25:51.587 lat (msec) : 4=0.11%, 10=0.95%, 20=2.73%, 50=9.49%, 100=34.81% 00:25:51.587 lat (msec) : 250=39.80%, 500=10.41%, 750=0.77%, 1000=0.93% 00:25:51.587 cpu : usr=1.51%, sys=1.63%, ctx=2815, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.587 issued rwts: total=0,4427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.587 job3: (groupid=0, jobs=1): err= 0: pid=500974: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=471, BW=118MiB/s (124MB/s)(1196MiB/10156msec); 0 zone resets 00:25:51.587 slat (usec): min=23, max=440974, avg=1736.05, stdev=7823.87 00:25:51.587 clat (usec): min=1958, max=830439, avg=134041.26, stdev=111469.54 00:25:51.587 lat (usec): min=1993, max=830504, avg=135777.31, stdev=113024.67 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 46], 20.00th=[ 61], 00:25:51.587 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 101], 60.00th=[ 120], 00:25:51.587 | 70.00th=[ 159], 80.00th=[ 201], 90.00th=[ 247], 95.00th=[ 313], 00:25:51.587 | 99.00th=[ 768], 99.50th=[ 785], 99.90th=[ 827], 99.95th=[ 827], 00:25:51.587 | 99.99th=[ 827] 00:25:51.587 bw ( KiB/s): min=14336, max=301659, per=9.66%, avg=120891.40, stdev=70203.01, samples=20 00:25:51.587 iops : min= 56, max= 1178, avg=472.15, stdev=274.18, samples=20 00:25:51.587 lat (msec) : 2=0.02%, 4=0.15%, 10=1.21%, 20=2.55%, 50=10.32% 00:25:51.587 lat (msec) : 100=35.78%, 250=40.25%, 500=8.40%, 750=0.02%, 1000=1.30% 00:25:51.587 cpu : usr=1.69%, sys=1.41%, ctx=2263, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.587 issued rwts: total=0,4785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.587 job4: (groupid=0, jobs=1): err= 0: pid=500990: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=394, BW=98.6MiB/s (103MB/s)(1002MiB/10161msec); 0 zone resets 00:25:51.587 slat (usec): min=18, max=123618, avg=1847.57, stdev=5818.50 00:25:51.587 clat (msec): min=2, max=820, avg=160.36, stdev=128.98 00:25:51.587 lat (msec): min=2, max=826, avg=162.21, stdev=130.29 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 31], 00:25:51.587 | 30.00th=[ 87], 40.00th=[ 138], 50.00th=[ 163], 60.00th=[ 184], 00:25:51.587 | 70.00th=[ 203], 80.00th=[ 234], 90.00th=[ 292], 95.00th=[ 376], 00:25:51.587 | 99.00th=[ 768], 99.50th=[ 776], 99.90th=[ 802], 99.95th=[ 810], 00:25:51.587 | 99.99th=[ 818] 00:25:51.587 bw ( KiB/s): min=30268, max=189440, per=8.06%, avg=100940.80, stdev=41703.18, samples=20 00:25:51.587 iops : min= 118, max= 740, avg=394.20, stdev=162.92, samples=20 00:25:51.587 lat (msec) : 4=5.12%, 10=7.54%, 20=4.09%, 50=7.16%, 100=8.63% 00:25:51.587 lat (msec) : 250=52.11%, 500=13.75%, 750=0.17%, 1000=1.42% 00:25:51.587 cpu 
: usr=1.14%, sys=1.58%, ctx=2370, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.587 issued rwts: total=0,4007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.587 job5: (groupid=0, jobs=1): err= 0: pid=501046: Tue Jul 23 03:25:17 2024 00:25:51.587 write: IOPS=522, BW=131MiB/s (137MB/s)(1329MiB/10179msec); 0 zone resets 00:25:51.587 slat (usec): min=21, max=111074, avg=1271.39, stdev=4077.38 00:25:51.587 clat (msec): min=2, max=386, avg=120.06, stdev=70.44 00:25:51.587 lat (msec): min=2, max=386, avg=121.34, stdev=71.15 00:25:51.587 clat percentiles (msec): 00:25:51.587 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 25], 20.00th=[ 49], 00:25:51.587 | 30.00th=[ 83], 40.00th=[ 102], 50.00th=[ 118], 60.00th=[ 138], 00:25:51.587 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 211], 95.00th=[ 243], 00:25:51.587 | 99.00th=[ 313], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 384], 00:25:51.587 | 99.99th=[ 388] 00:25:51.587 bw ( KiB/s): min=74752, max=299008, per=10.74%, avg=134484.20, stdev=47904.98, samples=20 00:25:51.587 iops : min= 292, max= 1168, avg=525.25, stdev=187.13, samples=20 00:25:51.587 lat (msec) : 4=0.36%, 10=1.69%, 20=5.68%, 50=12.71%, 100=18.86% 00:25:51.587 lat (msec) : 250=56.57%, 500=4.12% 00:25:51.587 cpu : usr=1.50%, sys=1.75%, ctx=3199, majf=0, minf=1 00:25:51.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:51.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,5317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 job6: (groupid=0, jobs=1): err= 0: pid=501049: Tue Jul 23 03:25:17 2024 00:25:51.588 write: IOPS=426, BW=107MiB/s (112MB/s)(1084MiB/10160msec); 0 zone resets 00:25:51.588 slat (usec): min=16, max=333358, avg=1544.82, stdev=7261.35 00:25:51.588 clat (msec): min=2, max=851, avg=148.34, stdev=120.21 00:25:51.588 lat (msec): min=2, max=885, avg=149.89, stdev=121.17 00:25:51.588 clat percentiles (msec): 00:25:51.588 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 59], 00:25:51.588 | 30.00th=[ 81], 40.00th=[ 101], 50.00th=[ 133], 60.00th=[ 148], 00:25:51.588 | 70.00th=[ 182], 80.00th=[ 218], 90.00th=[ 264], 95.00th=[ 342], 00:25:51.588 | 99.00th=[ 776], 99.50th=[ 827], 99.90th=[ 844], 99.95th=[ 852], 00:25:51.588 | 99.99th=[ 852] 00:25:51.588 bw ( KiB/s): min=28729, max=224768, per=8.73%, avg=109332.05, stdev=48679.30, samples=20 00:25:51.588 iops : min= 112, max= 878, avg=427.00, stdev=190.16, samples=20 00:25:51.588 lat (msec) : 4=0.18%, 10=1.48%, 20=3.55%, 50=12.94%, 100=21.49% 00:25:51.588 lat (msec) : 250=48.13%, 500=10.70%, 750=0.37%, 1000=1.15% 00:25:51.588 cpu : usr=1.27%, sys=1.65%, ctx=2770, majf=0, minf=1 00:25:51.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:51.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,4336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 job7: 
(groupid=0, jobs=1): err= 0: pid=501050: Tue Jul 23 03:25:17 2024 00:25:51.588 write: IOPS=448, BW=112MiB/s (117MB/s)(1141MiB/10182msec); 0 zone resets 00:25:51.588 slat (usec): min=25, max=450894, avg=1960.49, stdev=8611.63 00:25:51.588 clat (msec): min=11, max=847, avg=140.71, stdev=113.12 00:25:51.588 lat (msec): min=11, max=848, avg=142.67, stdev=114.56 00:25:51.588 clat percentiles (msec): 00:25:51.588 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 66], 00:25:51.588 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 106], 60.00th=[ 124], 00:25:51.588 | 70.00th=[ 161], 80.00th=[ 190], 90.00th=[ 243], 95.00th=[ 342], 00:25:51.588 | 99.00th=[ 776], 99.50th=[ 802], 99.90th=[ 844], 99.95th=[ 852], 00:25:51.588 | 99.99th=[ 852] 00:25:51.588 bw ( KiB/s): min=14336, max=258043, per=9.20%, avg=115134.05, stdev=69073.76, samples=20 00:25:51.588 iops : min= 56, max= 1007, avg=449.60, stdev=269.75, samples=20 00:25:51.588 lat (msec) : 20=0.07%, 50=5.24%, 100=42.80%, 250=42.71%, 500=7.74% 00:25:51.588 lat (msec) : 750=0.09%, 1000=1.36% 00:25:51.588 cpu : usr=1.59%, sys=1.45%, ctx=1651, majf=0, minf=1 00:25:51.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:51.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,4563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 job8: (groupid=0, jobs=1): err= 0: pid=501051: Tue Jul 23 03:25:17 2024 00:25:51.588 write: IOPS=381, BW=95.4MiB/s (100MB/s)(969MiB/10154msec); 0 zone resets 00:25:51.588 slat (usec): min=17, max=448884, avg=1849.22, stdev=9091.15 00:25:51.588 clat (usec): min=1186, max=881477, avg=165730.04, stdev=123540.62 00:25:51.588 lat (usec): min=1226, max=881508, avg=167579.26, stdev=124649.51 00:25:51.588 clat percentiles (msec): 00:25:51.588 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 56], 00:25:51.588 | 30.00th=[ 104], 40.00th=[ 146], 50.00th=[ 163], 60.00th=[ 180], 00:25:51.588 | 70.00th=[ 203], 80.00th=[ 230], 90.00th=[ 279], 95.00th=[ 347], 00:25:51.588 | 99.00th=[ 810], 99.50th=[ 835], 99.90th=[ 877], 99.95th=[ 877], 00:25:51.588 | 99.99th=[ 885] 00:25:51.588 bw ( KiB/s): min=21504, max=199680, per=7.80%, avg=97598.10, stdev=37608.19, samples=20 00:25:51.588 iops : min= 84, max= 780, avg=381.20, stdev=146.94, samples=20 00:25:51.588 lat (msec) : 2=0.23%, 4=0.85%, 10=1.96%, 20=4.36%, 50=10.35% 00:25:51.588 lat (msec) : 100=12.02%, 250=56.04%, 500=12.56%, 750=0.10%, 1000=1.52% 00:25:51.588 cpu : usr=0.98%, sys=1.34%, ctx=2293, majf=0, minf=1 00:25:51.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:51.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,3876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 job9: (groupid=0, jobs=1): err= 0: pid=501052: Tue Jul 23 03:25:17 2024 00:25:51.588 write: IOPS=542, BW=136MiB/s (142MB/s)(1364MiB/10048msec); 0 zone resets 00:25:51.588 slat (usec): min=15, max=456174, avg=1530.52, stdev=7272.44 00:25:51.588 clat (msec): min=2, max=890, avg=116.30, stdev=104.78 00:25:51.588 lat (msec): min=3, max=890, avg=117.83, stdev=106.14 00:25:51.588 clat percentiles (msec): 00:25:51.588 | 1.00th=[ 12], 5.00th=[ 22], 
10.00th=[ 41], 20.00th=[ 51], 00:25:51.588 | 30.00th=[ 69], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 108], 00:25:51.588 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 207], 95.00th=[ 241], 00:25:51.588 | 99.00th=[ 793], 99.50th=[ 835], 99.90th=[ 877], 99.95th=[ 894], 00:25:51.588 | 99.99th=[ 894] 00:25:51.588 bw ( KiB/s): min=16384, max=320000, per=11.02%, avg=138012.20, stdev=69406.23, samples=20 00:25:51.588 iops : min= 64, max= 1250, avg=539.05, stdev=271.08, samples=20 00:25:51.588 lat (msec) : 4=0.05%, 10=0.81%, 20=3.76%, 50=15.08%, 100=36.05% 00:25:51.588 lat (msec) : 250=39.77%, 500=3.32%, 1000=1.15% 00:25:51.588 cpu : usr=1.66%, sys=1.94%, ctx=2369, majf=0, minf=1 00:25:51.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:51.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,5456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 job10: (groupid=0, jobs=1): err= 0: pid=501053: Tue Jul 23 03:25:17 2024 00:25:51.588 write: IOPS=349, BW=87.4MiB/s (91.6MB/s)(890MiB/10177msec); 0 zone resets 00:25:51.588 slat (usec): min=23, max=104572, avg=2535.69, stdev=5611.85 00:25:51.588 clat (msec): min=6, max=442, avg=180.40, stdev=67.42 00:25:51.588 lat (msec): min=6, max=445, avg=182.94, stdev=68.13 00:25:51.588 clat percentiles (msec): 00:25:51.588 | 1.00th=[ 18], 5.00th=[ 55], 10.00th=[ 122], 20.00th=[ 142], 00:25:51.588 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 186], 00:25:51.588 | 70.00th=[ 197], 80.00th=[ 209], 90.00th=[ 264], 95.00th=[ 305], 00:25:51.588 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 435], 99.95th=[ 439], 00:25:51.588 | 99.99th=[ 443] 00:25:51.588 bw ( KiB/s): min=44120, max=142562, per=7.14%, avg=89446.60, stdev=21202.58, samples=20 00:25:51.588 iops : min= 172, max= 556, avg=349.30, stdev=82.77, samples=20 00:25:51.588 lat (msec) : 10=0.14%, 20=1.26%, 50=3.34%, 100=3.12%, 250=78.64% 00:25:51.588 lat (msec) : 500=13.49% 00:25:51.588 cpu : usr=1.14%, sys=1.22%, ctx=1352, majf=0, minf=1 00:25:51.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:25:51.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:51.588 issued rwts: total=0,3558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:51.588 00:25:51.588 Run status group 0 (all jobs): 00:25:51.588 WRITE: bw=1223MiB/s (1282MB/s), 87.4MiB/s-136MiB/s (91.6MB/s-142MB/s), io=12.2GiB (13.1GB), run=10048-10182msec 00:25:51.588 00:25:51.588 Disk stats (read/write): 00:25:51.588 nvme0n1: ios=49/9042, merge=0/0, ticks=408/1232074, in_queue=1232482, util=98.47% 00:25:51.588 nvme10n1: ios=50/9290, merge=0/0, ticks=96/1218846, in_queue=1218942, util=96.93% 00:25:51.588 nvme1n1: ios=45/8786, merge=0/0, ticks=577/1244982, in_queue=1245559, util=99.97% 00:25:51.588 nvme2n1: ios=14/9500, merge=0/0, ticks=38/1227668, in_queue=1227706, util=97.45% 00:25:51.588 nvme3n1: ios=48/7946, merge=0/0, ticks=98/1233697, in_queue=1233795, util=97.87% 00:25:51.588 nvme4n1: ios=38/10562, merge=0/0, ticks=503/1221605, in_queue=1222108, util=100.00% 00:25:51.588 nvme5n1: ios=0/8606, merge=0/0, ticks=0/1240264, in_queue=1240264, util=98.32% 00:25:51.588 nvme6n1: ios=47/9051, merge=0/0, 
ticks=559/1224412, in_queue=1224971, util=99.98% 00:25:51.588 nvme7n1: ios=43/7687, merge=0/0, ticks=2588/1229984, in_queue=1232572, util=99.98% 00:25:51.588 nvme8n1: ios=0/10320, merge=0/0, ticks=0/1207490, in_queue=1207490, util=99.02% 00:25:51.588 nvme9n1: ios=45/7047, merge=0/0, ticks=237/1225096, in_queue=1225333, util=100.00% 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:51.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.588 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:51.588 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.589 03:25:17 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.589 03:25:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:51.850 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.850 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:52.111 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.111 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:52.370 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:52.370 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:52.370 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.630 03:25:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:52.630 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:52.630 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:52.630 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:52.630 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:52.630 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.631 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:52.890 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.890 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.891 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:53.150 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:53.150 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.150 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:53.408 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- 
# trap - SIGINT SIGTERM EXIT 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.408 rmmod nvme_tcp 00:25:53.408 rmmod nvme_fabrics 00:25:53.408 rmmod nvme_keyring 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 495571 ']' 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 495571 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 495571 ']' 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 495571 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 495571 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 495571' 00:25:53.408 killing process with pid 495571 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 495571 00:25:53.408 03:25:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 495571 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.975 03:25:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.508 03:25:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:56.508 00:25:56.508 real 1m1.021s 00:25:56.508 user 3m26.257s 00:25:56.508 sys 0m23.272s 00:25:56.508 03:25:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:25:56.508 03:25:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.508 ************************************ 00:25:56.508 END TEST nvmf_multiconnection 00:25:56.508 ************************************ 00:25:56.508 03:25:22 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:56.508 03:25:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:56.508 03:25:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:56.508 03:25:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.508 ************************************ 00:25:56.508 START TEST nvmf_initiator_timeout 00:25:56.508 ************************************ 00:25:56.508 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:56.508 * Looking for test storage... 00:25:56.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:56.508 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.509 03:25:22 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:56.509 03:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.413 03:25:24 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:58.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:58.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:58.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:58.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.413 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:25:58.414 00:25:58.414 --- 10.0.0.2 ping statistics --- 00:25:58.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.414 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:25:58.414 00:25:58.414 --- 10.0.0.1 ping statistics --- 00:25:58.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.414 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=504321 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 504321 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 504321 ']' 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:58.414 03:25:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.414 [2024-07-23 03:25:24.839555] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:58.414 [2024-07-23 03:25:24.839647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.414 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.414 [2024-07-23 03:25:24.913500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.674 [2024-07-23 03:25:25.010232] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
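For reference, the network-namespace and target bring-up steps recorded above amount to the following sequence (a minimal sketch: it assumes root privileges, the cvl_0_0/cvl_0_1 port names from this particular run, and a shortened path to SPDK's nvmf_tgt binary; every command mirrors one captured in the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # start the NVMe-oF target (core mask 0xF, as logged)

Once the target process is listening on /var/tmp/spdk.sock, the test proceeds with the rpc_cmd calls shown below (bdev_malloc_create, bdev_delay_create, nvmf_create_transport, subsystem and listener setup).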
00:25:58.674 [2024-07-23 03:25:25.010289] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.674 [2024-07-23 03:25:25.010306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.674 [2024-07-23 03:25:25.010320] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.674 [2024-07-23 03:25:25.010331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.674 [2024-07-23 03:25:25.010391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.674 [2024-07-23 03:25:25.010419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.674 [2024-07-23 03:25:25.010483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.674 [2024-07-23 03:25:25.010485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 Malloc0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 Delay0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 [2024-07-23 03:25:25.199692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.674 [2024-07-23 03:25:25.227959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.674 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:59.611 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:59.611 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:59.611 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.611 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:59.611 03:25:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=504676 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:01.516 03:25:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:01.516 [global] 00:26:01.516 thread=1 00:26:01.516 invalidate=1 00:26:01.516 rw=write 00:26:01.516 time_based=1 00:26:01.516 runtime=60 00:26:01.516 
ioengine=libaio 00:26:01.516 direct=1 00:26:01.516 bs=4096 00:26:01.516 iodepth=1 00:26:01.516 norandommap=0 00:26:01.516 numjobs=1 00:26:01.516 00:26:01.516 verify_dump=1 00:26:01.516 verify_backlog=512 00:26:01.516 verify_state_save=0 00:26:01.516 do_verify=1 00:26:01.516 verify=crc32c-intel 00:26:01.516 [job0] 00:26:01.516 filename=/dev/nvme0n1 00:26:01.516 Could not set queue depth (nvme0n1) 00:26:01.516 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:01.516 fio-3.35 00:26:01.516 Starting 1 thread 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 true 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 true 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 true 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.809 true 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.809 03:25:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.341 true 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.341 true 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.341 
03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.341 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.601 true 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.601 true 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:07.601 03:25:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 504676 00:27:03.874 00:27:03.874 job0: (groupid=0, jobs=1): err= 0: pid=504745: Tue Jul 23 03:26:28 2024 00:27:03.874 read: IOPS=7, BW=29.9KiB/s (30.6kB/s)(1792KiB/60010msec) 00:27:03.874 slat (usec): min=8, max=7799, avg=32.39, stdev=367.81 00:27:03.874 clat (usec): min=572, max=41240k, avg=133411.84, stdev=1946465.18 00:27:03.874 lat (usec): min=589, max=41240k, avg=133444.22, stdev=1946464.41 00:27:03.874 clat percentiles (msec): 00:27:03.874 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:27:03.874 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:27:03.874 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:27:03.874 | 99.00th=[ 43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:27:03.874 | 99.99th=[17113] 00:27:03.874 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60010msec); 0 zone resets 00:27:03.874 slat (usec): min=7, max=28691, avg=71.07, stdev=1267.32 00:27:03.874 clat (usec): min=241, max=1327, avg=361.72, stdev=85.76 00:27:03.874 lat (usec): min=254, max=29008, avg=432.79, stdev=1268.34 00:27:03.874 clat percentiles (usec): 00:27:03.874 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 314], 00:27:03.874 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 363], 00:27:03.874 | 70.00th=[ 371], 80.00th=[ 412], 90.00th=[ 461], 95.00th=[ 478], 00:27:03.874 | 99.00th=[ 494], 99.50th=[ 881], 99.90th=[ 1336], 99.95th=[ 1336], 00:27:03.874 | 99.99th=[ 1336] 00:27:03.874 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:27:03.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:03.874 lat (usec) : 250=0.52%, 500=52.29%, 750=0.42%, 1000=0.21% 00:27:03.874 lat (msec) : 2=0.21%, 50=46.25%, >=2000=0.10% 00:27:03.874 cpu : usr=0.02%, sys=0.02%, ctx=964, majf=0, minf=2 00:27:03.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:03.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.874 issued rwts: total=448,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:03.874 00:27:03.874 Run status group 0 (all jobs): 00:27:03.874 READ: bw=29.9KiB/s (30.6kB/s), 29.9KiB/s-29.9KiB/s (30.6kB/s-30.6kB/s), io=1792KiB (1835kB), run=60010-60010msec 
00:27:03.874 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60010-60010msec 00:27:03.874 00:27:03.874 Disk stats (read/write): 00:27:03.874 nvme0n1: ios=497/512, merge=0/0, ticks=19724/186, in_queue=19910, util=99.93% 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:03.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:03.874 nvmf hotplug test: fio successful as expected 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.874 rmmod nvme_tcp 00:27:03.874 rmmod nvme_fabrics 00:27:03.874 rmmod nvme_keyring 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 504321 ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@490 -- # killprocess 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 504321 ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 504321' 00:27:03.874 killing process with pid 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 504321 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.874 03:26:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.133 03:26:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.133 00:27:04.133 real 1m8.177s 00:27:04.133 user 4m10.709s 00:27:04.133 sys 0m6.279s 00:27:04.133 03:26:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:04.133 03:26:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.133 ************************************ 00:27:04.133 END TEST nvmf_initiator_timeout 00:27:04.133 ************************************ 00:27:04.133 03:26:30 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:04.391 03:26:30 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:04.391 03:26:30 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:04.391 03:26:30 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.391 03:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.293 03:26:32 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.294 03:26:32 
nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:06.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:06.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:06.294 03:26:32 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:06.294 03:26:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:06.294 03:26:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:06.294 03:26:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.294 ************************************ 00:27:06.294 START TEST nvmf_perf_adq 00:27:06.294 ************************************ 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:06.294 * Looking for test storage... 
00:27:06.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.294 03:26:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.197 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.198 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:08.458 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:08.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 
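[editor's note] For readers skimming the xtrace, gather_supported_nvmf_pci_devs (which runs here and again before each test stage) boils down to: find the supported NICs by PCI ID, then read the net device bound to each one out of sysfs. A stand-alone sketch of that idea, assuming lspci and the usual sysfs layout; the variable names are illustrative, not the script's own:

    # Sketch: enumerate Intel E810 ports (8086:159b) and the net devices bound to them.
    # Assumes the standard sysfs layout /sys/bus/pci/devices/<bdf>/net/<ifname>.
    net_devs=()
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
            net_devs+=("$(basename "$netdir")")
        done
    done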
00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:08.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:08.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:08.458 03:26:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:09.027 03:26:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:10.929 03:26:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:16.194 03:26:42 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.194 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.195 03:26:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:27:16.195 00:27:16.195 --- 10.0.0.2 ping statistics --- 00:27:16.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.195 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:27:16.195 00:27:16.195 --- 10.0.0.1 ping statistics --- 00:27:16.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.195 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=516254 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 516254 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 516254 ']' 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:16.195 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.195 [2024-07-23 03:26:42.645625] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
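[editor's note] The nvmf_tcp_init block above turns the two E810 ports into a back-to-back test path: cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1) while cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace for the target (10.0.0.2), and the two pings confirm reachability in each direction before nvmf_tgt is started inside that namespace. Condensed into a sketch, using the interface names from this run:

    # Sketch of the namespace plumbing done by nvmf_tcp_init (names as in this run).
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator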
00:27:16.195 [2024-07-23 03:26:42.645705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.195 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.195 [2024-07-23 03:26:42.717700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.454 [2024-07-23 03:26:42.811329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.454 [2024-07-23 03:26:42.811391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.454 [2024-07-23 03:26:42.811409] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.454 [2024-07-23 03:26:42.811423] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.454 [2024-07-23 03:26:42.811434] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.454 [2024-07-23 03:26:42.811553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.454 [2024-07-23 03:26:42.811851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.454 [2024-07-23 03:26:42.811875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.454 [2024-07-23 03:26:42.811878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.454 03:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.712 [2024-07-23 03:26:43.037730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.712 Malloc1 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.712 [2024-07-23 03:26:43.090845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=516400 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:16.712 03:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:16.712 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:18.610 "tick_rate": 2700000000, 00:27:18.610 
"poll_groups": [ 00:27:18.610 { 00:27:18.610 "name": "nvmf_tgt_poll_group_000", 00:27:18.610 "admin_qpairs": 1, 00:27:18.610 "io_qpairs": 1, 00:27:18.610 "current_admin_qpairs": 1, 00:27:18.610 "current_io_qpairs": 1, 00:27:18.610 "pending_bdev_io": 0, 00:27:18.610 "completed_nvme_io": 19986, 00:27:18.610 "transports": [ 00:27:18.610 { 00:27:18.610 "trtype": "TCP" 00:27:18.610 } 00:27:18.610 ] 00:27:18.610 }, 00:27:18.610 { 00:27:18.610 "name": "nvmf_tgt_poll_group_001", 00:27:18.610 "admin_qpairs": 0, 00:27:18.610 "io_qpairs": 1, 00:27:18.610 "current_admin_qpairs": 0, 00:27:18.610 "current_io_qpairs": 1, 00:27:18.610 "pending_bdev_io": 0, 00:27:18.610 "completed_nvme_io": 16486, 00:27:18.610 "transports": [ 00:27:18.610 { 00:27:18.610 "trtype": "TCP" 00:27:18.610 } 00:27:18.610 ] 00:27:18.610 }, 00:27:18.610 { 00:27:18.610 "name": "nvmf_tgt_poll_group_002", 00:27:18.610 "admin_qpairs": 0, 00:27:18.610 "io_qpairs": 1, 00:27:18.610 "current_admin_qpairs": 0, 00:27:18.610 "current_io_qpairs": 1, 00:27:18.610 "pending_bdev_io": 0, 00:27:18.610 "completed_nvme_io": 20923, 00:27:18.610 "transports": [ 00:27:18.610 { 00:27:18.610 "trtype": "TCP" 00:27:18.610 } 00:27:18.610 ] 00:27:18.610 }, 00:27:18.610 { 00:27:18.610 "name": "nvmf_tgt_poll_group_003", 00:27:18.610 "admin_qpairs": 0, 00:27:18.610 "io_qpairs": 1, 00:27:18.610 "current_admin_qpairs": 0, 00:27:18.610 "current_io_qpairs": 1, 00:27:18.610 "pending_bdev_io": 0, 00:27:18.610 "completed_nvme_io": 19986, 00:27:18.610 "transports": [ 00:27:18.610 { 00:27:18.610 "trtype": "TCP" 00:27:18.610 } 00:27:18.610 ] 00:27:18.610 } 00:27:18.610 ] 00:27:18.610 }' 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:18.610 03:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 516400 00:27:26.755 Initializing NVMe Controllers 00:27:26.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:26.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:26.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:26.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:26.755 Initialization complete. Launching workers. 
00:27:26.755 ======================================================== 00:27:26.755 Latency(us) 00:27:26.755 Device Information : IOPS MiB/s Average min max 00:27:26.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10536.00 41.16 6075.25 3013.30 7870.28 00:27:26.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8605.70 33.62 7439.31 2090.02 12330.07 00:27:26.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10934.50 42.71 5852.78 1837.78 8434.28 00:27:26.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10480.20 40.94 6107.75 3290.13 7814.62 00:27:26.755 ======================================================== 00:27:26.755 Total : 40556.40 158.42 6313.11 1837.78 12330.07 00:27:26.755 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.755 rmmod nvme_tcp 00:27:26.755 rmmod nvme_fabrics 00:27:26.755 rmmod nvme_keyring 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 516254 ']' 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 516254 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 516254 ']' 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 516254 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 516254 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:26.755 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 516254' 00:27:26.755 killing process with pid 516254 00:27:26.756 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 516254 00:27:26.756 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 516254 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.013 03:26:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.548 03:26:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.548 03:26:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:29.548 03:26:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:29.806 03:26:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:31.708 03:26:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.977 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.977 03:27:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:36.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:36.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:36.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:36.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.978 
03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:27:36.978 00:27:36.978 --- 10.0.0.2 ping statistics --- 00:27:36.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.978 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:36.978 00:27:36.978 --- 10.0.0.1 ping statistics --- 00:27:36.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.978 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:36.978 net.core.busy_poll = 1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:36.978 net.core.busy_read = 1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=519116 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 519116 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 519116 ']' 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.978 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.978 [2024-07-23 03:27:03.522665] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:36.978 [2024-07-23 03:27:03.522759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.236 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.236 [2024-07-23 03:27:03.590221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.236 [2024-07-23 03:27:03.680324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.236 [2024-07-23 03:27:03.680379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.236 [2024-07-23 03:27:03.680405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.236 [2024-07-23 03:27:03.680418] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.237 [2024-07-23 03:27:03.680430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
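The ADQ configuration applied above reduces to a short sequence of host commands. The sketch below restates them in one place, using the interface, address and port from this run (cvl_0_0, 10.0.0.2, 4420); the "ip netns exec cvl_0_0_ns_spdk" prefix that the harness adds to each command is omitted here for brevity.

# Enable hardware TC offload and busy polling, then steer NVMe/TCP
# traffic on port 4420 into its own traffic class (TC 1).
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# The harness additionally runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0
# to align XPS transmit queues with the receive queues.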
00:27:37.237 [2024-07-23 03:27:03.680510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.237 [2024-07-23 03:27:03.680533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.237 [2024-07-23 03:27:03.680653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.237 [2024-07-23 03:27:03.680657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.237 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 [2024-07-23 03:27:03.910575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 Malloc1 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.495 [2024-07-23 03:27:03.963777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=519155 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:37.495 03:27:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:37.495 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:40.022 "tick_rate": 2700000000, 00:27:40.022 "poll_groups": [ 00:27:40.022 { 00:27:40.022 "name": "nvmf_tgt_poll_group_000", 00:27:40.022 "admin_qpairs": 1, 00:27:40.022 "io_qpairs": 2, 00:27:40.022 "current_admin_qpairs": 1, 00:27:40.022 "current_io_qpairs": 2, 00:27:40.022 "pending_bdev_io": 0, 00:27:40.022 "completed_nvme_io": 25437, 00:27:40.022 "transports": [ 00:27:40.022 { 00:27:40.022 "trtype": "TCP" 00:27:40.022 } 00:27:40.022 ] 00:27:40.022 }, 00:27:40.022 { 00:27:40.022 "name": "nvmf_tgt_poll_group_001", 00:27:40.022 "admin_qpairs": 0, 00:27:40.022 "io_qpairs": 2, 00:27:40.022 "current_admin_qpairs": 0, 00:27:40.022 "current_io_qpairs": 2, 00:27:40.022 "pending_bdev_io": 0, 00:27:40.022 "completed_nvme_io": 26529, 00:27:40.022 "transports": [ 00:27:40.022 { 00:27:40.022 "trtype": "TCP" 00:27:40.022 } 00:27:40.022 ] 00:27:40.022 }, 00:27:40.022 { 00:27:40.022 "name": "nvmf_tgt_poll_group_002", 00:27:40.022 "admin_qpairs": 0, 00:27:40.022 "io_qpairs": 0, 00:27:40.022 "current_admin_qpairs": 0, 00:27:40.022 "current_io_qpairs": 0, 00:27:40.022 "pending_bdev_io": 0, 00:27:40.022 "completed_nvme_io": 0, 
00:27:40.022 "transports": [ 00:27:40.022 { 00:27:40.022 "trtype": "TCP" 00:27:40.022 } 00:27:40.022 ] 00:27:40.022 }, 00:27:40.022 { 00:27:40.022 "name": "nvmf_tgt_poll_group_003", 00:27:40.022 "admin_qpairs": 0, 00:27:40.022 "io_qpairs": 0, 00:27:40.022 "current_admin_qpairs": 0, 00:27:40.022 "current_io_qpairs": 0, 00:27:40.022 "pending_bdev_io": 0, 00:27:40.022 "completed_nvme_io": 0, 00:27:40.022 "transports": [ 00:27:40.022 { 00:27:40.022 "trtype": "TCP" 00:27:40.022 } 00:27:40.022 ] 00:27:40.022 } 00:27:40.022 ] 00:27:40.022 }' 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:40.022 03:27:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:40.022 03:27:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:40.022 03:27:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:40.022 03:27:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 519155 00:27:48.130 Initializing NVMe Controllers 00:27:48.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:48.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:48.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:48.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:48.130 Initialization complete. Launching workers. 00:27:48.130 ======================================================== 00:27:48.130 Latency(us) 00:27:48.130 Device Information : IOPS MiB/s Average min max 00:27:48.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8747.20 34.17 7318.23 1704.81 53973.70 00:27:48.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5084.80 19.86 12634.50 1979.46 54546.47 00:27:48.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6286.90 24.56 10182.73 1527.94 56468.04 00:27:48.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6961.30 27.19 9196.35 1805.44 55267.22 00:27:48.130 ======================================================== 00:27:48.130 Total : 27080.20 105.78 9464.27 1527.94 56468.04 00:27:48.130 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.130 rmmod nvme_tcp 00:27:48.130 rmmod nvme_fabrics 00:27:48.130 rmmod nvme_keyring 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 519116 ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 519116 ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 519116' 00:27:48.130 killing process with pid 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 519116 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.130 03:27:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.035 03:27:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.035 03:27:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:50.035 00:27:50.035 real 0m43.833s 00:27:50.035 user 2m33.253s 00:27:50.035 sys 0m11.890s 00:27:50.035 03:27:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.035 03:27:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.035 ************************************ 00:27:50.035 END TEST nvmf_perf_adq 00:27:50.035 ************************************ 00:27:50.035 03:27:16 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:50.035 03:27:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:50.035 03:27:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.035 03:27:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.035 ************************************ 00:27:50.035 START TEST nvmf_shutdown 00:27:50.035 ************************************ 00:27:50.035 03:27:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:50.294 * Looking for test storage... 
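The pass/fail gate for the perf_adq run that just ended is the nvmf_get_stats query shown a few entries earlier: with ADQ steering in effect, all I/O queue pairs land on the busy poll groups, and in this run at least two of the four groups must stay idle. A minimal stand-alone form of that check could look like the following; the scripts/rpc.py invocation is an assumption (the harness goes through its rpc_cmd wrapper), but the jq filter and the threshold are the ones used above.

# Count poll groups that received no I/O queue pairs; fail if fewer
# than 2 of the 4 groups stayed idle (i.e. steering did not happen).
count=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering check failed: only $count idle poll groups"
    exit 1
fi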
00:27:50.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.294 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.295 ************************************ 00:27:50.295 START TEST nvmf_shutdown_tc1 00:27:50.295 ************************************ 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:50.295 03:27:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.295 03:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.233 03:27:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.233 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:27:52.234 00:27:52.234 --- 10.0.0.2 ping statistics --- 00:27:52.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.234 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:27:52.234 00:27:52.234 --- 10.0.0.1 ping statistics --- 00:27:52.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.234 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=522814 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 522814 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 522814 ']' 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:52.234 03:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.492 [2024-07-23 03:27:18.844904] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
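As in the previous test, the target for shutdown_tc1 is started inside the cvl_0_0_ns_spdk namespace and the harness then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough equivalent of that start-and-wait step is sketched below; the polling loop and the rpc_get_methods probe are illustrative (the harness's waitforlisten lives in autotest_common.sh), while the nvmf_tgt arguments are the ones visible in this run.

# Launch the target in the namespace, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done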
00:27:52.492 [2024-07-23 03:27:18.844986] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.492 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.492 [2024-07-23 03:27:18.919434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.492 [2024-07-23 03:27:19.011407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.492 [2024-07-23 03:27:19.011477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.492 [2024-07-23 03:27:19.011505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.492 [2024-07-23 03:27:19.011518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.492 [2024-07-23 03:27:19.011531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.492 [2024-07-23 03:27:19.011646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.492 [2024-07-23 03:27:19.011742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.492 [2024-07-23 03:27:19.011802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.492 [2024-07-23 03:27:19.011804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.751 [2024-07-23 03:27:19.151233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.751 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.751 Malloc1 00:27:52.751 [2024-07-23 03:27:19.226325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.751 Malloc2 00:27:52.751 Malloc3 00:27:53.010 Malloc4 00:27:53.010 Malloc5 00:27:53.010 Malloc6 00:27:53.010 Malloc7 00:27:53.010 Malloc8 00:27:53.269 Malloc9 00:27:53.269 Malloc10 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=522987 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 522987 /var/tmp/bdevperf.sock 00:27:53.269 03:27:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 522987 ']' 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": "$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.269 "hdgst": ${hdgst:-false}, 00:27:53.269 "ddgst": ${ddgst:-false} 00:27:53.269 }, 00:27:53.269 "method": "bdev_nvme_attach_controller" 00:27:53.269 } 00:27:53.269 EOF 00:27:53.269 )") 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": "$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.269 "hdgst": ${hdgst:-false}, 00:27:53.269 "ddgst": ${ddgst:-false} 00:27:53.269 }, 00:27:53.269 "method": "bdev_nvme_attach_controller" 00:27:53.269 } 00:27:53.269 EOF 00:27:53.269 )") 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": 
"$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.269 "hdgst": ${hdgst:-false}, 00:27:53.269 "ddgst": ${ddgst:-false} 00:27:53.269 }, 00:27:53.269 "method": "bdev_nvme_attach_controller" 00:27:53.269 } 00:27:53.269 EOF 00:27:53.269 )") 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": "$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.269 "hdgst": ${hdgst:-false}, 00:27:53.269 "ddgst": ${ddgst:-false} 00:27:53.269 }, 00:27:53.269 "method": "bdev_nvme_attach_controller" 00:27:53.269 } 00:27:53.269 EOF 00:27:53.269 )") 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": "$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.269 "hdgst": ${hdgst:-false}, 00:27:53.269 "ddgst": ${ddgst:-false} 00:27:53.269 }, 00:27:53.269 "method": "bdev_nvme_attach_controller" 00:27:53.269 } 00:27:53.269 EOF 00:27:53.269 )") 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.269 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.269 { 00:27:53.269 "params": { 00:27:53.269 "name": "Nvme$subsystem", 00:27:53.269 "trtype": "$TEST_TRANSPORT", 00:27:53.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.269 "adrfam": "ipv4", 00:27:53.269 "trsvcid": "$NVMF_PORT", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.270 "hdgst": ${hdgst:-false}, 00:27:53.270 "ddgst": ${ddgst:-false} 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 } 00:27:53.270 EOF 00:27:53.270 )") 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.270 { 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme$subsystem", 00:27:53.270 "trtype": "$TEST_TRANSPORT", 
00:27:53.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "$NVMF_PORT", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.270 "hdgst": ${hdgst:-false}, 00:27:53.270 "ddgst": ${ddgst:-false} 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 } 00:27:53.270 EOF 00:27:53.270 )") 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.270 { 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme$subsystem", 00:27:53.270 "trtype": "$TEST_TRANSPORT", 00:27:53.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "$NVMF_PORT", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.270 "hdgst": ${hdgst:-false}, 00:27:53.270 "ddgst": ${ddgst:-false} 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 } 00:27:53.270 EOF 00:27:53.270 )") 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.270 { 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme$subsystem", 00:27:53.270 "trtype": "$TEST_TRANSPORT", 00:27:53.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "$NVMF_PORT", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.270 "hdgst": ${hdgst:-false}, 00:27:53.270 "ddgst": ${ddgst:-false} 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 } 00:27:53.270 EOF 00:27:53.270 )") 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.270 { 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme$subsystem", 00:27:53.270 "trtype": "$TEST_TRANSPORT", 00:27:53.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "$NVMF_PORT", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.270 "hdgst": ${hdgst:-false}, 00:27:53.270 "ddgst": ${ddgst:-false} 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 } 00:27:53.270 EOF 00:27:53.270 )") 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:53.270 03:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme1", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme2", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme3", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme4", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme5", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme6", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme7", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme8", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.270 "hdgst": false, 
00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme9", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 },{ 00:27:53.270 "params": { 00:27:53.270 "name": "Nvme10", 00:27:53.270 "trtype": "tcp", 00:27:53.270 "traddr": "10.0.0.2", 00:27:53.270 "adrfam": "ipv4", 00:27:53.270 "trsvcid": "4420", 00:27:53.270 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.270 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.270 "hdgst": false, 00:27:53.270 "ddgst": false 00:27:53.270 }, 00:27:53.270 "method": "bdev_nvme_attach_controller" 00:27:53.270 }' 00:27:53.270 [2024-07-23 03:27:19.736164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:53.270 [2024-07-23 03:27:19.736235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:53.270 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.270 [2024-07-23 03:27:19.800264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.529 [2024-07-23 03:27:19.887356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 522987 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:55.428 03:27:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:56.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 522987 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:56.362 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 522814 00:27:56.362 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:56.362 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:56.362 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:56.362 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.363 { 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme$subsystem", 00:27:56.363 "trtype": "$TEST_TRANSPORT", 00:27:56.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "$NVMF_PORT", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.363 "hdgst": ${hdgst:-false}, 00:27:56.363 "ddgst": ${ddgst:-false} 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 } 00:27:56.363 EOF 00:27:56.363 )") 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
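The same fragment loop runs a second time here because tc1 drives two SPDK applications in a row: first bdev_svc, deliberately terminated with kill -9 at shutdown.sh@83 (the shell's "line 73 ... Killed" message above is simply the asynchronous death notice for that background job), and then bdevperf, each receiving a freshly generated config. In both cases the JSON never touches disk; it reaches the application through process substitution, which is why the command lines show --json /dev/fd/62 and /dev/fd/63. The invocation pattern, as it appears in the trace (workspace path shortened to $rootdir):

"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

That is: queue depth 64, 64 KiB I/Os, a verify workload, and a 1-second run against the ten controllers attached by the generated config.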
00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:56.363 03:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme1", 00:27:56.363 "trtype": "tcp", 00:27:56.363 "traddr": "10.0.0.2", 00:27:56.363 "adrfam": "ipv4", 00:27:56.363 "trsvcid": "4420", 00:27:56.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:56.363 "hdgst": false, 00:27:56.363 "ddgst": false 00:27:56.363 }, 00:27:56.363 "method": "bdev_nvme_attach_controller" 00:27:56.363 },{ 00:27:56.363 "params": { 00:27:56.363 "name": "Nvme2", 00:27:56.363 "trtype": "tcp", 00:27:56.363 "traddr": "10.0.0.2", 00:27:56.363 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme3", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme4", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme5", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme6", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme7", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme8", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:56.364 "hdgst": false, 
00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme9", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 },{ 00:27:56.364 "params": { 00:27:56.364 "name": "Nvme10", 00:27:56.364 "trtype": "tcp", 00:27:56.364 "traddr": "10.0.0.2", 00:27:56.364 "adrfam": "ipv4", 00:27:56.364 "trsvcid": "4420", 00:27:56.364 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:56.364 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:56.364 "hdgst": false, 00:27:56.364 "ddgst": false 00:27:56.364 }, 00:27:56.364 "method": "bdev_nvme_attach_controller" 00:27:56.364 }' 00:27:56.364 [2024-07-23 03:27:22.777670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:56.364 [2024-07-23 03:27:22.777748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523293 ] 00:27:56.364 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.364 [2024-07-23 03:27:22.847302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.364 [2024-07-23 03:27:22.934864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.260 Running I/O for 1 seconds... 00:27:59.193 00:27:59.193 Latency(us) 00:27:59.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.193 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme1n1 : 1.05 182.01 11.38 0.00 0.00 347961.27 27962.03 326223.64 00:27:59.193 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme2n1 : 1.18 216.61 13.54 0.00 0.00 287997.35 23592.96 318456.41 00:27:59.193 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme3n1 : 1.19 215.84 13.49 0.00 0.00 284445.20 22330.79 324670.20 00:27:59.193 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme4n1 : 1.17 218.26 13.64 0.00 0.00 276565.14 21554.06 318456.41 00:27:59.193 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme5n1 : 1.19 214.59 13.41 0.00 0.00 276946.68 23010.42 320009.86 00:27:59.193 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme6n1 : 1.13 170.10 10.63 0.00 0.00 341937.62 28156.21 323116.75 00:27:59.193 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme7n1 : 1.20 213.65 13.35 0.00 0.00 269195.95 27379.48 298261.62 00:27:59.193 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 
00:27:59.193 Nvme8n1 : 1.16 165.39 10.34 0.00 0.00 340694.66 28350.39 324670.20 00:27:59.193 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme9n1 : 1.20 212.62 13.29 0.00 0.00 261274.55 7427.41 323116.75 00:27:59.193 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:59.193 Verification LBA range: start 0x0 length 0x400 00:27:59.193 Nvme10n1 : 1.17 164.46 10.28 0.00 0.00 331361.34 25437.68 349525.33 00:27:59.193 =================================================================================================================== 00:27:59.193 Total : 1973.53 123.35 0.00 0.00 297543.45 7427.41 349525.33 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.451 03:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:59.451 rmmod nvme_tcp 00:27:59.451 rmmod nvme_fabrics 00:27:59.451 rmmod nvme_keyring 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 522814 ']' 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 522814 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 522814 ']' 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 522814 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:59.451 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 522814 00:27:59.709 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:59.709 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:59.709 03:27:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 522814' 00:27:59.709 killing process with pid 522814 00:27:59.709 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 522814 00:27:59.709 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 522814 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.967 03:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.502 00:28:02.502 real 0m11.897s 00:28:02.502 user 0m34.875s 00:28:02.502 sys 0m3.118s 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.502 ************************************ 00:28:02.502 END TEST nvmf_shutdown_tc1 00:28:02.502 ************************************ 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:02.502 ************************************ 00:28:02.502 START TEST nvmf_shutdown_tc2 00:28:02.502 ************************************ 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.502 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.503 03:27:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:28:02.503 00:28:02.503 --- 10.0.0.2 ping statistics --- 00:28:02.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.503 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:02.503 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:28:02.503 00:28:02.503 --- 10.0.0.1 ping statistics --- 00:28:02.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.504 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=524173 
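Stripped of the xtrace noise, the topology that nvmf_tcp_init builds here is simple: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target address 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and the two one-packet pings prove reachability in both directions before the target is started. Collected from the trace above into one readable sequence (assuming nothing else touches the interfaces in between):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target back to initiator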
00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 524173 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 524173 ']' 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:02.504 03:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.504 [2024-07-23 03:27:28.830311] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:02.504 [2024-07-23 03:27:28.830380] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.504 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.504 [2024-07-23 03:27:28.894416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.504 [2024-07-23 03:27:28.979901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.504 [2024-07-23 03:27:28.979949] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.504 [2024-07-23 03:27:28.979972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.504 [2024-07-23 03:27:28.979983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.504 [2024-07-23 03:27:28.979992] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
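nvmfappstart -m 0x1E hands the target a core mask of 0x1E, binary 11110: cores 1 through 4 are enabled and core 0 is left free for the bdevperf/bdev_svc side, which is launched with a 0x1 mask. That matches the "Total cores available: 4" banner above and the four reactor notices that follow, pinned to cores 1-4. The mask arithmetic, for reference:

printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))   # prints 0x1E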
00:28:02.504 [2024-07-23 03:27:28.980072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.504 [2024-07-23 03:27:28.980136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.504 [2024-07-23 03:27:28.980203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:02.504 [2024-07-23 03:27:28.980207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.763 [2024-07-23 03:27:29.141454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.763 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.763 Malloc1 00:28:02.763 [2024-07-23 03:27:29.217506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.763 Malloc2 00:28:02.763 Malloc3 00:28:03.022 Malloc4 00:28:03.022 Malloc5 00:28:03.022 Malloc6 00:28:03.022 Malloc7 00:28:03.022 Malloc8 00:28:03.281 Malloc9 00:28:03.281 Malloc10 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=524305 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 524305 /var/tmp/bdevperf.sock 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 524305 ']' 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:03.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
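shutdown.sh@27-35 above batches one RPC block per subsystem into rpcs.txt and replays the whole file, which is why ten Malloc bdevs appear and the target ends up listening on 10.0.0.2:4420. The per-subsystem block itself is not shown in the trace; the sketch below is a plausible reconstruction using standard SPDK RPC names only, with the malloc size, block size and serial number chosen arbitrarily, and the one-command-per-invocation replay loop standing in for the test's faster batching.

# Hypothetical per-subsystem block of the kind accumulated into rpcs.txt (i = 1..10).
i=1
cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF

# Replay the accumulated commands against the running target, one RPC per line.
while read -r cmd; do
    ./scripts/rpc.py -s /var/tmp/spdk.sock $cmd
done < rpcs.txt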
00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.281 { 00:28:03.281 "params": { 00:28:03.281 "name": "Nvme$subsystem", 00:28:03.281 "trtype": "$TEST_TRANSPORT", 00:28:03.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.281 "adrfam": "ipv4", 00:28:03.281 "trsvcid": "$NVMF_PORT", 00:28:03.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.281 "hdgst": ${hdgst:-false}, 00:28:03.281 "ddgst": ${ddgst:-false} 00:28:03.281 }, 00:28:03.281 "method": "bdev_nvme_attach_controller" 00:28:03.281 } 00:28:03.281 EOF 00:28:03.281 )") 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.281 { 00:28:03.281 "params": { 00:28:03.281 "name": "Nvme$subsystem", 00:28:03.281 "trtype": "$TEST_TRANSPORT", 00:28:03.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.281 "adrfam": "ipv4", 00:28:03.281 "trsvcid": "$NVMF_PORT", 00:28:03.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.281 "hdgst": ${hdgst:-false}, 00:28:03.281 "ddgst": ${ddgst:-false} 00:28:03.281 }, 00:28:03.281 "method": "bdev_nvme_attach_controller" 00:28:03.281 } 00:28:03.281 EOF 00:28:03.281 )") 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.281 { 00:28:03.281 "params": { 00:28:03.281 "name": "Nvme$subsystem", 00:28:03.281 "trtype": "$TEST_TRANSPORT", 00:28:03.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.281 "adrfam": "ipv4", 00:28:03.281 "trsvcid": "$NVMF_PORT", 00:28:03.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.281 "hdgst": ${hdgst:-false}, 00:28:03.281 "ddgst": ${ddgst:-false} 00:28:03.281 }, 00:28:03.281 "method": "bdev_nvme_attach_controller" 00:28:03.281 } 00:28:03.281 EOF 00:28:03.281 )") 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.281 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.281 { 00:28:03.281 "params": { 00:28:03.281 "name": "Nvme$subsystem", 00:28:03.281 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 
00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.282 { 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme$subsystem", 00:28:03.282 "trtype": "$TEST_TRANSPORT", 00:28:03.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "$NVMF_PORT", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.282 "hdgst": ${hdgst:-false}, 00:28:03.282 "ddgst": ${ddgst:-false} 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 } 00:28:03.282 EOF 00:28:03.282 )") 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:03.282 03:27:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme1", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme2", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme3", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme4", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme5", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme6", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme7", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:03.282 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:03.282 "hdgst": false, 00:28:03.282 "ddgst": false 00:28:03.282 }, 00:28:03.282 "method": "bdev_nvme_attach_controller" 00:28:03.282 },{ 00:28:03.282 "params": { 00:28:03.282 "name": "Nvme8", 00:28:03.282 "trtype": "tcp", 00:28:03.282 "traddr": "10.0.0.2", 00:28:03.282 "adrfam": "ipv4", 00:28:03.282 "trsvcid": "4420", 00:28:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:03.283 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:03.283 "hdgst": false, 
00:28:03.283 "ddgst": false 00:28:03.283 }, 00:28:03.283 "method": "bdev_nvme_attach_controller" 00:28:03.283 },{ 00:28:03.283 "params": { 00:28:03.283 "name": "Nvme9", 00:28:03.283 "trtype": "tcp", 00:28:03.283 "traddr": "10.0.0.2", 00:28:03.283 "adrfam": "ipv4", 00:28:03.283 "trsvcid": "4420", 00:28:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:03.283 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:03.283 "hdgst": false, 00:28:03.283 "ddgst": false 00:28:03.283 }, 00:28:03.283 "method": "bdev_nvme_attach_controller" 00:28:03.283 },{ 00:28:03.283 "params": { 00:28:03.283 "name": "Nvme10", 00:28:03.283 "trtype": "tcp", 00:28:03.283 "traddr": "10.0.0.2", 00:28:03.283 "adrfam": "ipv4", 00:28:03.283 "trsvcid": "4420", 00:28:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:03.283 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:03.283 "hdgst": false, 00:28:03.283 "ddgst": false 00:28:03.283 }, 00:28:03.283 "method": "bdev_nvme_attach_controller" 00:28:03.283 }' 00:28:03.283 [2024-07-23 03:27:29.745589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:03.283 [2024-07-23 03:27:29.745702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524305 ] 00:28:03.283 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.283 [2024-07-23 03:27:29.809434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.541 [2024-07-23 03:27:29.896133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.914 Running I/O for 10 seconds... 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.172 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.431 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:05.431 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:05.431 03:27:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 524305 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 524305 ']' 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 524305 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524305 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524305' 00:28:05.689 killing process with pid 524305 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 524305 00:28:05.689 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 524305 00:28:05.689 Received shutdown signal, test time was about 0.842550 seconds 00:28:05.689 00:28:05.689 Latency(us) 00:28:05.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.689 Job: Nvme1n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.689 Verification LBA range: start 0x0 length 0x400 00:28:05.689 Nvme1n1 : 0.77 248.06 15.50 0.00 0.00 254463.24 20777.34 253211.69 00:28:05.689 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.689 Verification LBA range: start 0x0 length 0x400 00:28:05.689 Nvme2n1 : 0.80 239.14 14.95 0.00 0.00 257955.46 27962.03 245444.46 00:28:05.689 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.689 Verification LBA range: start 0x0 length 0x400 00:28:05.689 Nvme3n1 : 0.82 233.46 14.59 0.00 0.00 258304.88 21942.42 257872.02 00:28:05.690 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme4n1 : 0.79 244.11 15.26 0.00 0.00 240007.14 19029.71 246997.90 00:28:05.690 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme5n1 : 0.80 240.17 15.01 0.00 0.00 238332.84 18738.44 260978.92 00:28:05.690 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme6n1 : 0.84 228.11 14.26 0.00 0.00 234447.71 22816.24 264085.81 00:28:05.690 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme7n1 : 0.77 250.55 15.66 0.00 0.00 215120.40 18932.62 226803.11 00:28:05.690 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme8n1 : 0.83 232.68 14.54 0.00 0.00 228965.14 24758.04 262532.36 00:28:05.690 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme9n1 : 0.83 231.23 14.45 0.00 0.00 224946.25 22719.15 239230.67 00:28:05.690 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:05.690 Verification LBA range: start 0x0 length 0x400 00:28:05.690 Nvme10n1 : 0.77 165.90 10.37 0.00 0.00 299574.61 22622.06 288940.94 00:28:05.690 =================================================================================================================== 00:28:05.690 Total : 2313.41 144.59 0.00 0.00 243337.19 18738.44 288940.94 00:28:05.948 03:27:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 524173 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.882 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:07.139 
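The read_io_count checks earlier in the run (shutdown.sh@57-69) implement waitforio: the test polls bdev_get_iostat on Nvme1n1 until at least 100 reads have completed, proving that bdevperf is actually driving I/O before the target is killed out from under it. A condensed sketch of that loop follows; the jq filter, the 100-read threshold, the ten retries and the 0.25 s sleep all come from the visible shell lines, while the rpc.py path is an assumption.

# Condensed form of the waitforio loop traced above.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i > 0; i--)); do
        count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then   # enough reads observed, bdevperf is live
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Example: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1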
03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:07.139 rmmod nvme_tcp 00:28:07.139 rmmod nvme_fabrics 00:28:07.139 rmmod nvme_keyring 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 524173 ']' 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 524173 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 524173 ']' 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 524173 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524173 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524173' 00:28:07.139 killing process with pid 524173 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 524173 00:28:07.139 03:27:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 524173 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.706 03:27:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.611 00:28:09.611 real 0m7.466s 00:28:09.611 user 0m22.164s 00:28:09.611 sys 0m1.449s 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.611 ************************************ 00:28:09.611 END TEST nvmf_shutdown_tc2 00:28:09.611 ************************************ 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.611 ************************************ 00:28:09.611 START TEST nvmf_shutdown_tc3 00:28:09.611 ************************************ 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.611 03:27:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.611 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.612 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.612 
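The device-probe section above (nvmf/common.sh@340-401) shows how the script maps each supported e810 PCI function to its kernel network interface: it globs the net/ directory under the PCI device in sysfs and keeps only the basename, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1. The same lookup, reduced to a few lines, is sketched below.

# Resolve a PCI function to its network interface name, as the probe above does.
pci_to_netdev() {
    local pci=$1 devs
    devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0
    [ -e "${devs[0]}" ] || return 1               # no netdev bound to this function
    echo "${devs[@]##*/}"                         # strip the sysfs path, keep the name
}

pci_to_netdev 0000:0a:00.0   # -> cvl_0_0
pci_to_netdev 0000:0a:00.1   # -> cvl_0_1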
03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.612 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:28:09.875 00:28:09.875 --- 10.0.0.2 ping statistics --- 00:28:09.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.875 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:09.875 00:28:09.875 --- 10.0.0.1 ping statistics --- 00:28:09.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.875 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.875 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=525146 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 525146 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 525146 ']' 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.876 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:09.876 [2024-07-23 03:27:36.357505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:09.876 [2024-07-23 03:27:36.357575] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.876 [2024-07-23 03:27:36.421153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.179 [2024-07-23 03:27:36.512396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.179 [2024-07-23 03:27:36.512458] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.179 [2024-07-23 03:27:36.512471] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.179 [2024-07-23 03:27:36.512482] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.179 [2024-07-23 03:27:36.512492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.179 [2024-07-23 03:27:36.512577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.179 [2024-07-23 03:27:36.512709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.179 [2024-07-23 03:27:36.512764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:10.179 [2024-07-23 03:27:36.512766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.179 [2024-07-23 03:27:36.675472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.179 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.180 03:27:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.439 Malloc1 00:28:10.439 [2024-07-23 03:27:36.764792] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.439 Malloc2 00:28:10.439 Malloc3 00:28:10.439 Malloc4 00:28:10.439 Malloc5 00:28:10.439 Malloc6 00:28:10.698 Malloc7 00:28:10.698 Malloc8 00:28:10.698 Malloc9 00:28:10.698 Malloc10 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=525327 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 525327 /var/tmp/bdevperf.sock 00:28:10.698 03:27:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 525327 ']' 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:10.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 00:28:10.698 "trtype": "$TEST_TRANSPORT", 00:28:10.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.698 "adrfam": "ipv4", 00:28:10.698 "trsvcid": "$NVMF_PORT", 00:28:10.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.698 "hdgst": ${hdgst:-false}, 00:28:10.698 "ddgst": ${ddgst:-false} 00:28:10.698 }, 00:28:10.698 "method": "bdev_nvme_attach_controller" 00:28:10.698 } 00:28:10.698 EOF 00:28:10.698 )") 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 00:28:10.698 "trtype": "$TEST_TRANSPORT", 00:28:10.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.698 "adrfam": "ipv4", 00:28:10.698 "trsvcid": "$NVMF_PORT", 00:28:10.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.698 "hdgst": ${hdgst:-false}, 00:28:10.698 "ddgst": ${ddgst:-false} 00:28:10.698 }, 00:28:10.698 "method": "bdev_nvme_attach_controller" 00:28:10.698 } 00:28:10.698 EOF 00:28:10.698 )") 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 
00:28:10.698 "trtype": "$TEST_TRANSPORT", 00:28:10.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.698 "adrfam": "ipv4", 00:28:10.698 "trsvcid": "$NVMF_PORT", 00:28:10.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.698 "hdgst": ${hdgst:-false}, 00:28:10.698 "ddgst": ${ddgst:-false} 00:28:10.698 }, 00:28:10.698 "method": "bdev_nvme_attach_controller" 00:28:10.698 } 00:28:10.698 EOF 00:28:10.698 )") 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 00:28:10.698 "trtype": "$TEST_TRANSPORT", 00:28:10.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.698 "adrfam": "ipv4", 00:28:10.698 "trsvcid": "$NVMF_PORT", 00:28:10.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.698 "hdgst": ${hdgst:-false}, 00:28:10.698 "ddgst": ${ddgst:-false} 00:28:10.698 }, 00:28:10.698 "method": "bdev_nvme_attach_controller" 00:28:10.698 } 00:28:10.698 EOF 00:28:10.698 )") 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 00:28:10.698 "trtype": "$TEST_TRANSPORT", 00:28:10.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.698 "adrfam": "ipv4", 00:28:10.698 "trsvcid": "$NVMF_PORT", 00:28:10.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.698 "hdgst": ${hdgst:-false}, 00:28:10.698 "ddgst": ${ddgst:-false} 00:28:10.698 }, 00:28:10.698 "method": "bdev_nvme_attach_controller" 00:28:10.698 } 00:28:10.698 EOF 00:28:10.698 )") 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.698 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.698 { 00:28:10.698 "params": { 00:28:10.698 "name": "Nvme$subsystem", 00:28:10.699 "trtype": "$TEST_TRANSPORT", 00:28:10.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.699 "adrfam": "ipv4", 00:28:10.699 "trsvcid": "$NVMF_PORT", 00:28:10.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.699 "hdgst": ${hdgst:-false}, 00:28:10.699 "ddgst": ${ddgst:-false} 00:28:10.699 }, 00:28:10.699 "method": "bdev_nvme_attach_controller" 00:28:10.699 } 00:28:10.699 EOF 00:28:10.699 )") 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.699 { 00:28:10.699 "params": { 00:28:10.699 "name": "Nvme$subsystem", 00:28:10.699 "trtype": 
"$TEST_TRANSPORT", 00:28:10.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.699 "adrfam": "ipv4", 00:28:10.699 "trsvcid": "$NVMF_PORT", 00:28:10.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.699 "hdgst": ${hdgst:-false}, 00:28:10.699 "ddgst": ${ddgst:-false} 00:28:10.699 }, 00:28:10.699 "method": "bdev_nvme_attach_controller" 00:28:10.699 } 00:28:10.699 EOF 00:28:10.699 )") 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.699 { 00:28:10.699 "params": { 00:28:10.699 "name": "Nvme$subsystem", 00:28:10.699 "trtype": "$TEST_TRANSPORT", 00:28:10.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.699 "adrfam": "ipv4", 00:28:10.699 "trsvcid": "$NVMF_PORT", 00:28:10.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.699 "hdgst": ${hdgst:-false}, 00:28:10.699 "ddgst": ${ddgst:-false} 00:28:10.699 }, 00:28:10.699 "method": "bdev_nvme_attach_controller" 00:28:10.699 } 00:28:10.699 EOF 00:28:10.699 )") 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.699 { 00:28:10.699 "params": { 00:28:10.699 "name": "Nvme$subsystem", 00:28:10.699 "trtype": "$TEST_TRANSPORT", 00:28:10.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.699 "adrfam": "ipv4", 00:28:10.699 "trsvcid": "$NVMF_PORT", 00:28:10.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.699 "hdgst": ${hdgst:-false}, 00:28:10.699 "ddgst": ${ddgst:-false} 00:28:10.699 }, 00:28:10.699 "method": "bdev_nvme_attach_controller" 00:28:10.699 } 00:28:10.699 EOF 00:28:10.699 )") 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.699 { 00:28:10.699 "params": { 00:28:10.699 "name": "Nvme$subsystem", 00:28:10.699 "trtype": "$TEST_TRANSPORT", 00:28:10.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.699 "adrfam": "ipv4", 00:28:10.699 "trsvcid": "$NVMF_PORT", 00:28:10.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.699 "hdgst": ${hdgst:-false}, 00:28:10.699 "ddgst": ${ddgst:-false} 00:28:10.699 }, 00:28:10.699 "method": "bdev_nvme_attach_controller" 00:28:10.699 } 00:28:10.699 EOF 00:28:10.699 )") 00:28:10.699 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:10.959 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:10.959 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:10.959 03:27:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme1", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme2", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme3", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme4", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme5", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme6", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme7", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme8", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:10.959 "hdgst": false, 
00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme9", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 },{ 00:28:10.959 "params": { 00:28:10.959 "name": "Nvme10", 00:28:10.959 "trtype": "tcp", 00:28:10.959 "traddr": "10.0.0.2", 00:28:10.959 "adrfam": "ipv4", 00:28:10.959 "trsvcid": "4420", 00:28:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:10.959 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:10.959 "hdgst": false, 00:28:10.959 "ddgst": false 00:28:10.959 }, 00:28:10.959 "method": "bdev_nvme_attach_controller" 00:28:10.959 }' 00:28:10.959 [2024-07-23 03:27:37.285584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:10.959 [2024-07-23 03:27:37.285684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525327 ] 00:28:10.959 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.959 [2024-07-23 03:27:37.348778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.959 [2024-07-23 03:27:37.434968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.862 Running I/O for 10 seconds... 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:13.122 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:13.381 03:27:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 525146 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 525146 ']' 00:28:13.655 03:27:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 525146 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 525146 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 525146' 00:28:13.655 killing process with pid 525146 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 525146 00:28:13.655 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 525146 00:28:13.655 [2024-07-23 03:27:40.102833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.655 [2024-07-23 03:27:40.102940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.102966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.102979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.102991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) 
to be set 00:28:13.656 [2024-07-23 03:27:40.103133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103683] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.103732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904560 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 
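
The wall of tcp.c:1598 recv-state errors above (and continuing below) appears to be the target side tearing down its TCP qpairs after the kill 525146 issued above, while bdevperf still holds ten open controllers; the matching host-side fallout shows up further down as ABORTED - SQ DELETION completions from nvme_qpair.c. The killprocess steps traced just before the errors follow the pattern sketched here; the function name is illustrative and the sudo branch is condensed relative to the real helper.

killprocess_sketch() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1            # the '[ -z 525146 ]' guard in the trace
    kill -0 "$pid" || return 1           # the process must still be alive
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 above
    fi
    if [ "$process_name" = sudo ]; then
        # assumption: when only the sudo wrapper pid is held, signal its child
        pid=$(pgrep -P "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"                          # plain SIGTERM, as in the trace
    wait "$pid" || true                  # reap it so the shutdown log is captured
}
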
00:28:13.656 [2024-07-23 03:27:40.105242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.656 [2024-07-23 03:27:40.105266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is 
same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.105859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74b700 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107924] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.107999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.657 [2024-07-23 03:27:40.108105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 
00:28:13.658 [2024-07-23 03:27:40.108204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-23 03:27:40.108451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with tid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 he state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-23 03:27:40.108465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 he state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904a00 is same with t[2024-07-23 03:27:40.108480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(5) to be set 00:28:13.658 id:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b34fd0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.108621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.658 [2024-07-23 03:27:40.108731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.658 [2024-07-23 03:27:40.108745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc300 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.110991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111195] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.658 [2024-07-23 03:27:40.111362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 
00:28:13.659 [2024-07-23 03:27:40.111518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.111828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904ea0 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.113822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905360 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is 
same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.114993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115237] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.659 [2024-07-23 03:27:40.115321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.115417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905800 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 
00:28:13.660 [2024-07-23 03:27:40.116565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is 
same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.116996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.660 [2024-07-23 03:27:40.117229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.117338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905cc0 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118879] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.118988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 
00:28:13.661 [2024-07-23 03:27:40.119178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is 
same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.119572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906160 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.661 [2024-07-23 03:27:40.120796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.120995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121093] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 
00:28:13.662 [2024-07-23 03:27:40.121393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.121491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906600 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.122364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906aa0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.128133] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.662 [2024-07-23 03:27:40.128241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b34fd0 (9): Bad file descriptor 00:28:13.662 [2024-07-23 03:27:40.128331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9eec0 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.128502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.662 [2024-07-23 03:27:40.128639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca0a10 is same with the state(5) to be set 00:28:13.662 [2024-07-23 03:27:40.128688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.662 [2024-07-23 03:27:40.128708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0af90 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.128859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.128973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.128985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ada190 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.129012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc300 (9): Bad file descriptor 00:28:13.663 [2024-07-23 03:27:40.129060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07810 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.129221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff6b0 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.129390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c81040 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.129565] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.663 [2024-07-23 03:27:40.129691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.129704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800f0 is same with the state(5) to be set 00:28:13.663 [2024-07-23 03:27:40.129789] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.663 [2024-07-23 03:27:40.130069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.663 [2024-07-23 03:27:40.130455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.663 [2024-07-23 03:27:40.130469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.664 [2024-07-23 03:27:40.130873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.130973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.130989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 
[2024-07-23 03:27:40.131176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 
03:27:40.131470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.664 [2024-07-23 03:27:40.131633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.664 [2024-07-23 03:27:40.131649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.131983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.131996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132122] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c71c90 was disconnected and freed. reset controller. 
00:28:13.665 [2024-07-23 03:27:40.132544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 
03:27:40.132877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.132985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.132999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 
03:27:40.133189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.665 [2024-07-23 03:27:40.133203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.665 [2024-07-23 03:27:40.133218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 
03:27:40.133484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 
03:27:40.133801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.133979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.133993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 
03:27:40.134100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.666 [2024-07-23 03:27:40.134364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.666 [2024-07-23 03:27:40.134377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 
03:27:40.134393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.134406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.134422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.134435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.134454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.134468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.134484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.134497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.134581] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ad8460 was disconnected and freed. reset controller. 00:28:13.667 [2024-07-23 03:27:40.134777] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.137441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:13.667 [2024-07-23 03:27:40.137490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0af90 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.137965] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:13.667 [2024-07-23 03:27:40.138011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c800f0 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.667 [2024-07-23 03:27:40.139343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0af90 with addr=10.0.0.2, port=4420 00:28:13.667 [2024-07-23 03:27:40.139368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0af90 is same with the state(5) to be set 00:28:13.667 [2024-07-23 03:27:40.139414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9eec0 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca0a10 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ada190 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07810 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff6b0 (9): Bad file descriptor 00:28:13.667 
[2024-07-23 03:27:40.139580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c81040 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.139706] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.139797] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.139907] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.140004] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.140103] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:13.667 [2024-07-23 03:27:40.140324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.667 [2024-07-23 03:27:40.140354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c800f0 with addr=10.0.0.2, port=4420 00:28:13.667 [2024-07-23 03:27:40.140371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800f0 is same with the state(5) to be set 00:28:13.667 [2024-07-23 03:27:40.140392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0af90 (9): Bad file descriptor 00:28:13.667 [2024-07-23 03:27:40.140458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.140998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.667 [2024-07-23 03:27:40.141248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.667 [2024-07-23 03:27:40.141263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.668 [2024-07-23 03:27:40.141923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.141981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.141997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 
03:27:40.142219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.668 [2024-07-23 03:27:40.142427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.668 [2024-07-23 03:27:40.142442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6fee0 is same with the state(5) to be set 00:28:13.668 [2024-07-23 03:27:40.143862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.143894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.143929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.143945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.143962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.143976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.143992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144266] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.669 [2024-07-23 03:27:40.144937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.669 [2024-07-23 03:27:40.144952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.144973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.144988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.670 [2024-07-23 03:27:40.145516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 03:27:40.145812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.670 [2024-07-23 03:27:40.145828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.670 [2024-07-23 
03:27:40.145843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:13.670 [2024-07-23 03:27:40.145858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.670 [2024-07-23 03:27:40.145872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:13.670 [2024-07-23 03:27:40.145887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68890 is same with the state(5) to be set
00:28:13.670 [2024-07-23 03:27:40.147933] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:13.670 [2024-07-23 03:27:40.147966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:13.670 [2024-07-23 03:27:40.148021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c800f0 (9): Bad file descriptor
00:28:13.670 [2024-07-23 03:27:40.148045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:13.670 [2024-07-23 03:27:40.148059] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:13.670 [2024-07-23 03:27:40.148075] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:13.670 [2024-07-23 03:27:40.148192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:13.670 [2024-07-23 03:27:40.148392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.670 [2024-07-23 03:27:40.148421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc300 with addr=10.0.0.2, port=4420
00:28:13.670 [2024-07-23 03:27:40.148439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc300 is same with the state(5) to be set
00:28:13.670 [2024-07-23 03:27:40.148574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.670 [2024-07-23 03:27:40.148612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b34fd0 with addr=10.0.0.2, port=4420
00:28:13.670 [2024-07-23 03:27:40.148637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b34fd0 is same with the state(5) to be set
00:28:13.670 [2024-07-23 03:27:40.148652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:13.670 [2024-07-23 03:27:40.148665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:13.670 [2024-07-23 03:27:40.148678] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:13.670 [2024-07-23 03:27:40.149280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
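The long runs of READ / ABORTED - SQ DELETION (00/08) pairs above are the per-command notices printed while outstanding I/O is aborted as its submission queue is deleted during the controller resets, and the ERROR entries show the subsequent reconnect attempts to 10.0.0.2:4420 being refused (connect() errno 111, ECONNREFUSED). When digging through a saved copy of this console output, a short grep pass can condense it into a summary; this is only a sketch, and the file name build.log is an assumption rather than a path taken from the job:
# count aborted completions per queue id
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
# list the subsystems whose controllers ended up in the failed state
grep -o '\[nqn\.[^]]*\] in failed state' build.log | sort | uniq -c
# count refused TCP connect attempts (errno 111 = ECONNREFUSED)
grep -o 'connect() failed, errno = 111' build.log | wc -l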
00:28:13.670 [2024-07-23 03:27:40.149327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc300 (9): Bad file descriptor 00:28:13.670 [2024-07-23 03:27:40.149349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b34fd0 (9): Bad file descriptor 00:28:13.670 [2024-07-23 03:27:40.149459] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:13.670 [2024-07-23 03:27:40.149515] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:13.671 [2024-07-23 03:27:40.149534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:13.671 [2024-07-23 03:27:40.149549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:13.671 [2024-07-23 03:27:40.149567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:13.671 [2024-07-23 03:27:40.149587] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:13.671 [2024-07-23 03:27:40.149610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:13.671 [2024-07-23 03:27:40.149701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.149976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.149992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.671 [2024-07-23 03:27:40.150811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.671 [2024-07-23 03:27:40.150829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.150860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.150890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.150920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.150950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.150979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.150994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.151647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.151662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18500 is same with the state(5) to be set 00:28:13.672 [2024-07-23 03:27:40.152902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.152955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.152971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.152987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.672 [2024-07-23 03:27:40.153264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.672 [2024-07-23 03:27:40.153278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.153989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.673 [2024-07-23 03:27:40.154450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.673 [2024-07-23 03:27:40.154466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:13.674 [2024-07-23 03:27:40.154496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 
03:27:40.154802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.154844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.154859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c706e0 is same with the state(5) to be set 00:28:13.674 [2024-07-23 03:27:40.156102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.674 [2024-07-23 03:27:40.156693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.674 [2024-07-23 03:27:40.156707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.156983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.156998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.675 [2024-07-23 03:27:40.157896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.675 [2024-07-23 03:27:40.157909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.157932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.157947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.157963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.157977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.157992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.158006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.158021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.158035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.158058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.158072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.158101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.158116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c731b0 is same with the state(5) to be set 00:28:13.676 [2024-07-23 03:27:40.159357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.676 [2024-07-23 03:27:40.160327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.676 [2024-07-23 03:27:40.160341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.677 [2024-07-23 03:27:40.160765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.160983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.160999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 
03:27:40.161072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.161383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.161398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad4680 is same with the state(5) to be set 00:28:13.677 [2024-07-23 03:27:40.162701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.162746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.162778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.162810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.162845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.677 [2024-07-23 03:27:40.162876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.677 [2024-07-23 03:27:40.162890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.162916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.162929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.162959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.162975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.162989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.678 [2024-07-23 03:27:40.163632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.678 [2024-07-23 03:27:40.163651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.163984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.163999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.679 [2024-07-23 03:27:40.164316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.679 [2024-07-23 03:27:40.164330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.164707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.164721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad5ba0 is same with the state(5) to be set 00:28:13.680 [2024-07-23 03:27:40.166011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166166] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.680 [2024-07-23 03:27:40.166814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.680 [2024-07-23 03:27:40.166829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.166846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.166860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.166876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.166894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.166918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.166932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.166948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.166962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.166978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.166992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:13.681 [2024-07-23 03:27:40.167437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 
03:27:40.167754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.167971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.167985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.168001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.681 [2024-07-23 03:27:40.168014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.681 [2024-07-23 03:27:40.168029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad70c0 is same with the state(5) to be set 00:28:13.681 [2024-07-23 03:27:40.170095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
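The qpair dumps above all end with completions reported as "ABORTED - SQ DELETION (00/08)": every READ/WRITE still queued when the submission queue is deleted during the controller reset completes with Status Code Type 0x0 (generic) and Status Code 0x08, "Command Aborted due to SQ Deletion". The sketch below is not SPDK code; it only assumes the standard NVMe completion-queue-entry layout, and shows how the fields printed on those lines (sct/sc, cid, cdw0, sqhd, p, m, dnr) are extracted from a raw 16-byte completion entry.

/* Minimal sketch (not SPDK's implementation): decode the fields that the
 * spdk_nvme_print_completion lines above report, assuming the standard
 * NVMe completion queue entry layout (16 bytes, DW0..DW3). */
#include <stdint.h>
#include <stdio.h>

struct nvme_cqe {
    uint32_t cdw0;   /* DW0: command-specific result */
    uint32_t rsvd;   /* DW1: reserved */
    uint32_t dw2;    /* DW2: SQ head pointer (15:0), SQ id (31:16) */
    uint32_t dw3;    /* DW3: CID (15:0), phase (16), status field (31:17) */
};

static void print_completion(const struct nvme_cqe *c)
{
    unsigned sqhd = c->dw2 & 0xffff;
    unsigned cid  = c->dw3 & 0xffff;
    unsigned p    = (c->dw3 >> 16) & 0x1;
    unsigned sc   = (c->dw3 >> 17) & 0xff; /* status code */
    unsigned sct  = (c->dw3 >> 25) & 0x7;  /* status code type */
    unsigned m    = (c->dw3 >> 30) & 0x1;  /* more */
    unsigned dnr  = (c->dw3 >> 31) & 0x1;  /* do not retry */

    /* SCT 0x0 / SC 0x08 is the generic status "Command Aborted due to
     * SQ Deletion" that the log renders as "ABORTED - SQ DELETION (00/08)". */
    printf("(%02x/%02x) cid:%u cdw0:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, cid, (unsigned)c->cdw0, sqhd, p, m, dnr);
}

int main(void)
{
    /* Hypothetical entry matching the pattern above: SCT=0, SC=0x08. */
    struct nvme_cqe c = { .cdw0 = 0, .dw2 = 0,
                          .dw3 = (0x0u << 25) | (0x08u << 17) };
    print_completion(&c);
    return 0;
}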
00:28:13.681 [2024-07-23 03:27:40.170127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:13.681 [2024-07-23 03:27:40.170155] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:13.682 [2024-07-23 03:27:40.170178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:13.682 [2024-07-23 03:27:40.170518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.682 [2024-07-23 03:27:40.170547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0af90 with addr=10.0.0.2, port=4420
00:28:13.682 [2024-07-23 03:27:40.170564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0af90 is same with the state(5) to be set
00:28:13.682 [2024-07-23 03:27:40.170647] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:13.682 [2024-07-23 03:27:40.170672] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:13.682 [2024-07-23 03:27:40.170698] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:13.682 [2024-07-23 03:27:40.170719] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:13.682 [2024-07-23 03:27:40.170738] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:13.682 [2024-07-23 03:27:40.170766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0af90 (9): Bad file descriptor
00:28:13.682 [2024-07-23 03:27:40.170863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:13.682 [2024-07-23 03:27:40.170888] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:13.682 [2024-07-23 03:27:40.170905] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:13.682 task offset: 26752 on job bdev=Nvme4n1 fails
00:28:13.682
00:28:13.682                                                                                                  Latency(us)
00:28:13.682 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:28:13.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme1n1 ended in about 0.91 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme1n1                     :       0.91  140.43    8.78   70.21  0.00  300509.11   24758.04  256318.58
00:28:13.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme2n1 ended in about 0.92 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme2n1                     :       0.92  139.03    8.69   69.51  0.00  297401.46   20583.16  302921.96
00:28:13.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme3n1 ended in about 0.92 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme3n1                     :       0.92  138.55    8.66   69.28  0.00  292294.86   23010.42  302921.96
00:28:13.682 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme4n1 ended in about 0.90 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme4n1                     :       0.90  212.41   13.28   70.80  0.00  209603.13   19418.07  242337.56
00:28:13.682 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme5n1 ended in about 0.93 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme5n1                     :       0.93  138.07    8.63   69.03  0.00  281210.63   22816.24  287387.50
00:28:13.682 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme6n1 ended in about 0.93 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme6n1                     :       0.93  137.57    8.60   68.79  0.00  276195.05   20291.89  262532.36
00:28:13.682 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme7n1 ended in about 0.93 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme7n1                     :       0.93  205.63   12.85   68.54  0.00  203426.51   17282.09  267192.70
00:28:13.682 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme8n1 ended in about 0.94 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme8n1                     :       0.94  136.61    8.54   68.30  0.00  266524.70   22524.97  256318.58
00:28:13.682 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme9n1 ended in about 0.91 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme9n1                     :       0.91  212.08   13.26   70.69  0.00  187529.01    8543.95  259425.47
00:28:13.682 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:13.682 Job: Nvme10n1 ended in about 0.91 seconds with error
00:28:13.682 Verification LBA range: start 0x0 length 0x400
00:28:13.682 Nvme10n1                    :       0.91  139.91    8.74   69.95  0.00  247413.32   22816.24  259425.47
00:28:13.682 ===================================================================================================================
00:28:13.682 Total                       :            1600.29  100.02  695.13  0.00  251117.63    8543.95  302921.96
00:28:13.682 [2024-07-23 03:27:40.198774] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:13.682 [2024-07-23 03:27:40.198864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:13.682 [2024-07-23 03:27:40.198899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:13.682 [2024-07-23 03:27:40.199256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.682 [2024-07-23 03:27:40.199291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ada190 with addr=10.0.0.2, port=4420
00:28:13.682 [2024-07-23 03:27:40.199312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ada190 is same with the state(5) to be set
00:28:13.682 [2024-07-23 03:27:40.199467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.682 [2024-07-23 03:27:40.199494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b07810 with addr=10.0.0.2, port=4420
00:28:13.682 [2024-07-23 03:27:40.199511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07810 is same with the state(5) to be set
00:28:13.682 [2024-07-23 03:27:40.201303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.682 [2024-07-23 03:27:40.201334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff6b0 with addr=10.0.0.2, port=4420
00:28:13.682 [2024-07-23 03:27:40.201355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff6b0 is same with the state(5) to be set 00:28:13.682 [2024-07-23 03:27:40.201503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.682 [2024-07-23 03:27:40.201532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9eec0 with addr=10.0.0.2, port=4420 00:28:13.682 [2024-07-23 03:27:40.201549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9eec0 is same with the state(5) to be set 00:28:13.682 [2024-07-23 03:27:40.201695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.682 [2024-07-23 03:27:40.201721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca0a10 with addr=10.0.0.2, port=4420 00:28:13.682 [2024-07-23 03:27:40.201737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca0a10 is same with the state(5) to be set 00:28:13.682 [2024-07-23 03:27:40.201878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.682 [2024-07-23 03:27:40.201903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c81040 with addr=10.0.0.2, port=4420 00:28:13.682 [2024-07-23 03:27:40.201919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c81040 is same with the state(5) to be set 00:28:13.682 [2024-07-23 03:27:40.202076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.682 [2024-07-23 03:27:40.202102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c800f0 with addr=10.0.0.2, port=4420 00:28:13.682 [2024-07-23 03:27:40.202118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800f0 is same with the state(5) to be set 00:28:13.682 [2024-07-23 03:27:40.202143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ada190 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07810 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202183] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:13.682 [2024-07-23 03:27:40.202196] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:13.682 [2024-07-23 03:27:40.202212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:13.682 [2024-07-23 03:27:40.202262] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:13.682 [2024-07-23 03:27:40.202289] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:13.682 [2024-07-23 03:27:40.202316] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:13.682 [2024-07-23 03:27:40.202337] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:13.682 [2024-07-23 03:27:40.202356] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:13.682 [2024-07-23 03:27:40.202430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:13.682 [2024-07-23 03:27:40.202455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.682 [2024-07-23 03:27:40.202486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.682 [2024-07-23 03:27:40.202520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff6b0 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9eec0 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca0a10 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c81040 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c800f0 (9): Bad file descriptor 00:28:13.682 [2024-07-23 03:27:40.202610] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:13.682 [2024-07-23 03:27:40.202632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:13.682 [2024-07-23 03:27:40.202646] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:13.682 [2024-07-23 03:27:40.202664] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:13.682 [2024-07-23 03:27:40.202678] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:13.682 [2024-07-23 03:27:40.202691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:13.683 [2024-07-23 03:27:40.202781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.202802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.683 [2024-07-23 03:27:40.202976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.683 [2024-07-23 03:27:40.203008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b34fd0 with addr=10.0.0.2, port=4420 00:28:13.683 [2024-07-23 03:27:40.203025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b34fd0 is same with the state(5) to be set 00:28:13.683 [2024-07-23 03:27:40.203168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.683 [2024-07-23 03:27:40.203194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc300 with addr=10.0.0.2, port=4420 00:28:13.683 [2024-07-23 03:27:40.203210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc300 is same with the state(5) to be set 00:28:13.683 [2024-07-23 03:27:40.203225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203238] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203268] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203283] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203325] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203354] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203411] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.203479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.203490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
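Each subsystem in the cascade above fails the same way: the reconnect's connect() is refused (errno = 111), the dead socket then surfaces as a bad file descriptor when the qpair tries to flush, and controller reinitialization gives up, so every path ends with "Resetting controller failed." That is consistent with what shutdown_tc3 exercises, stopping the target while bdevperf still has verify I/O outstanding, which makes all ten cnode subsystems unreachable at once. The two numeric codes are the ordinary Linux errno values; a quick way to confirm them on the build host (a sketch, assuming the standard kernel/glibc header locations):

  grep -w 111 /usr/include/asm-generic/errno.h        # #define ECONNREFUSED 111 /* Connection refused */
  grep -w 9 /usr/include/asm-generic/errno-base.h     # #define EBADF 9 /* Bad file descriptor */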
00:28:13.683 [2024-07-23 03:27:40.203501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.203512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.203528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b34fd0 (9): Bad file descriptor 00:28:13.683 [2024-07-23 03:27:40.203546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc300 (9): Bad file descriptor 00:28:13.683 [2024-07-23 03:27:40.203586] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203603] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203624] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203647] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:13.683 [2024-07-23 03:27:40.203662] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:13.683 [2024-07-23 03:27:40.203675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:13.683 [2024-07-23 03:27:40.203713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.683 [2024-07-23 03:27:40.203731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:14.252 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:14.252 03:27:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 525327 00:28:15.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (525327) - No such process 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.190 03:27:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.190 rmmod nvme_tcp 00:28:15.190 rmmod nvme_fabrics 00:28:15.190 rmmod nvme_keyring 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.190 03:27:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.729 00:28:17.729 real 0m7.597s 00:28:17.729 user 0m18.912s 00:28:17.729 sys 0m1.476s 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.729 ************************************ 00:28:17.729 END TEST nvmf_shutdown_tc3 00:28:17.729 ************************************ 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:17.729 00:28:17.729 real 0m27.174s 00:28:17.729 user 1m16.040s 00:28:17.729 sys 0m6.184s 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.729 03:27:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.729 ************************************ 00:28:17.729 END TEST nvmf_shutdown 00:28:17.729 ************************************ 00:28:17.729 03:27:43 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:17.729 03:27:43 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.729 03:27:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.729 03:27:43 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:17.729 03:27:43 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:17.729 03:27:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.729 03:27:43 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:17.730 03:27:43 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:17.730 03:27:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:17.730 03:27:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.730 
03:27:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.730 ************************************ 00:28:17.730 START TEST nvmf_multicontroller 00:28:17.730 ************************************ 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:17.730 * Looking for test storage... 00:28:17.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:17.730 03:27:43 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.730 03:27:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.635 03:27:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.635 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:28:19.636 00:28:19.636 --- 10.0.0.2 ping statistics --- 00:28:19.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.636 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:28:19.636 00:28:19.636 --- 10.0.0.1 ping statistics --- 00:28:19.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.636 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=527837 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 527837 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 527837 ']' 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.636 03:27:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.636 [2024-07-23 03:27:46.021188] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:19.636 [2024-07-23 03:27:46.021277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.636 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.636 [2024-07-23 03:27:46.085414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.636 [2024-07-23 03:27:46.169399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
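The nvmf_tcp_init trace above boils down to a small amount of namespace plumbing: the target-side port cvl_0_0 is moved into its own network namespace, both ends get 10.0.0.0/24 addresses, TCP port 4420 is opened on the initiator-side interface, reachability is checked with ping in both directions, and nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the commands the harness ran (the binary path is shortened to a relative path; interfaces, addresses, and flags are as they appear in this run):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port 4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE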
00:28:19.636 [2024-07-23 03:27:46.169453] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.636 [2024-07-23 03:27:46.169477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.636 [2024-07-23 03:27:46.169487] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.636 [2024-07-23 03:27:46.169498] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.636 [2024-07-23 03:27:46.169578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.636 [2024-07-23 03:27:46.169703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.636 [2024-07-23 03:27:46.169707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 [2024-07-23 03:27:46.310072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 Malloc0 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 [2024-07-23 03:27:46.376701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 [2024-07-23 03:27:46.384583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 Malloc1 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=527870 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 527870 /var/tmp/bdevperf.sock 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 527870 ']' 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.895 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.462 NVMe0n1 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:20.462 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.463 1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.463 request: 00:28:20.463 { 00:28:20.463 "name": "NVMe0", 00:28:20.463 "trtype": "tcp", 00:28:20.463 "traddr": "10.0.0.2", 00:28:20.463 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:20.463 "hostaddr": "10.0.0.2", 00:28:20.463 "hostsvcid": "60000", 00:28:20.463 "adrfam": "ipv4", 00:28:20.463 "trsvcid": "4420", 00:28:20.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.463 "method": "bdev_nvme_attach_controller", 00:28:20.463 "req_id": 1 00:28:20.463 } 00:28:20.463 Got JSON-RPC error response 00:28:20.463 response: 00:28:20.463 { 00:28:20.463 "code": -114, 00:28:20.463 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:20.463 } 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.463 request: 00:28:20.463 { 00:28:20.463 "name": "NVMe0", 00:28:20.463 "trtype": "tcp", 00:28:20.463 "traddr": "10.0.0.2", 00:28:20.463 "hostaddr": "10.0.0.2", 00:28:20.463 "hostsvcid": "60000", 00:28:20.463 "adrfam": "ipv4", 00:28:20.463 "trsvcid": "4420", 00:28:20.463 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.463 "method": "bdev_nvme_attach_controller", 00:28:20.463 "req_id": 1 00:28:20.463 } 00:28:20.463 Got JSON-RPC error response 00:28:20.463 response: 00:28:20.463 { 00:28:20.463 "code": -114, 00:28:20.463 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:20.463 } 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.463 request: 00:28:20.463 { 00:28:20.463 "name": "NVMe0", 00:28:20.463 "trtype": "tcp", 00:28:20.463 "traddr": "10.0.0.2", 00:28:20.463 "hostaddr": "10.0.0.2", 00:28:20.463 "hostsvcid": "60000", 00:28:20.463 "adrfam": "ipv4", 00:28:20.463 "trsvcid": "4420", 00:28:20.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.463 "multipath": "disable", 00:28:20.463 "method": "bdev_nvme_attach_controller", 00:28:20.463 "req_id": 1 00:28:20.463 } 00:28:20.463 Got JSON-RPC error response 00:28:20.463 response: 00:28:20.463 { 00:28:20.463 "code": -114, 00:28:20.463 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:20.463 } 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # es=1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.463 request: 00:28:20.463 { 00:28:20.463 "name": "NVMe0", 00:28:20.463 "trtype": "tcp", 00:28:20.463 "traddr": "10.0.0.2", 00:28:20.463 "hostaddr": "10.0.0.2", 00:28:20.463 "hostsvcid": "60000", 00:28:20.463 "adrfam": "ipv4", 00:28:20.463 "trsvcid": "4420", 00:28:20.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.463 "multipath": "failover", 00:28:20.463 "method": "bdev_nvme_attach_controller", 00:28:20.463 "req_id": 1 00:28:20.463 } 00:28:20.463 Got JSON-RPC error response 00:28:20.463 response: 00:28:20.463 { 00:28:20.463 "code": -114, 00:28:20.463 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:20.463 } 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.463 03:27:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.722 00:28:20.722 03:27:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.722 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:20.722 03:27:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:22.096 0 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 527870 ']' 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 527870' 00:28:22.096 killing process with pid 
527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 527870 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:22.096 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:22.096 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:22.096 [2024-07-23 03:27:46.488427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:22.096 [2024-07-23 03:27:46.488526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527870 ] 00:28:22.096 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.096 [2024-07-23 03:27:46.548865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.096 [2024-07-23 03:27:46.637409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.096 [2024-07-23 03:27:47.175299] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 7b668b24-c787-4aca-8fad-7b395f4976b1 already exists 00:28:22.096 [2024-07-23 03:27:47.175343] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:7b668b24-c787-4aca-8fad-7b395f4976b1 alias for bdev NVMe1n1 00:28:22.096 [2024-07-23 03:27:47.175360] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:22.096 Running I/O for 1 seconds... 
00:28:22.096 00:28:22.096 Latency(us) 00:28:22.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.097 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:22.097 NVMe0n1 : 1.00 19667.21 76.83 0.00 0.00 6498.75 4004.98 13981.01 00:28:22.097 =================================================================================================================== 00:28:22.097 Total : 19667.21 76.83 0.00 0.00 6498.75 4004.98 13981.01 00:28:22.097 Received shutdown signal, test time was about 1.000000 seconds 00:28:22.097 00:28:22.097 Latency(us) 00:28:22.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.097 =================================================================================================================== 00:28:22.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.097 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.097 rmmod nvme_tcp 00:28:22.097 rmmod nvme_fabrics 00:28:22.097 rmmod nvme_keyring 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 527837 ']' 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 527837 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 527837 ']' 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 527837 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:22.097 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 527837 00:28:22.356 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:22.356 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:22.356 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 527837' 00:28:22.356 killing process with pid 527837 00:28:22.356 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 527837 00:28:22.356 03:27:48 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 527837 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.617 03:27:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.524 03:27:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.524 00:28:24.524 real 0m7.196s 00:28:24.524 user 0m11.034s 00:28:24.524 sys 0m2.271s 00:28:24.524 03:27:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:24.524 03:27:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 ************************************ 00:28:24.524 END TEST nvmf_multicontroller 00:28:24.524 ************************************ 00:28:24.524 03:27:51 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:24.524 03:27:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:24.524 03:27:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:24.524 03:27:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 ************************************ 00:28:24.524 START TEST nvmf_aer 00:28:24.524 ************************************ 00:28:24.524 03:27:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:24.784 * Looking for test storage... 
00:28:24.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:24.784 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:24.785 03:27:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:26.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.693 
03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.693 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:28:26.694 00:28:26.694 --- 10.0.0.2 ping statistics --- 00:28:26.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.694 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:26.694 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:28:26.694 00:28:26.694 --- 10.0.0.1 ping statistics --- 00:28:26.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.694 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=530068 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 530068 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 530068 ']' 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.956 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:26.957 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.957 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:26.957 03:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:26.957 [2024-07-23 03:27:53.347047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:26.957 [2024-07-23 03:27:53.347121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.957 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.957 [2024-07-23 03:27:53.422268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.957 [2024-07-23 03:27:53.514326] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.957 [2024-07-23 03:27:53.514379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:26.957 [2024-07-23 03:27:53.514403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.957 [2024-07-23 03:27:53.514417] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.957 [2024-07-23 03:27:53.514429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.957 [2024-07-23 03:27:53.514514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.957 [2024-07-23 03:27:53.514570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.957 [2024-07-23 03:27:53.514685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.957 [2024-07-23 03:27:53.514689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 [2024-07-23 03:27:54.292364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 Malloc0 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 [2024-07-23 03:27:54.343504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 [ 00:28:27.933 { 00:28:27.933 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:27.933 "subtype": "Discovery", 00:28:27.933 "listen_addresses": [], 00:28:27.933 "allow_any_host": true, 00:28:27.933 "hosts": [] 00:28:27.933 }, 00:28:27.933 { 00:28:27.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.933 "subtype": "NVMe", 00:28:27.933 "listen_addresses": [ 00:28:27.933 { 00:28:27.933 "trtype": "TCP", 00:28:27.933 "adrfam": "IPv4", 00:28:27.933 "traddr": "10.0.0.2", 00:28:27.933 "trsvcid": "4420" 00:28:27.933 } 00:28:27.933 ], 00:28:27.933 "allow_any_host": true, 00:28:27.933 "hosts": [], 00:28:27.933 "serial_number": "SPDK00000000000001", 00:28:27.933 "model_number": "SPDK bdev Controller", 00:28:27.933 "max_namespaces": 2, 00:28:27.933 "min_cntlid": 1, 00:28:27.933 "max_cntlid": 65519, 00:28:27.933 "namespaces": [ 00:28:27.933 { 00:28:27.933 "nsid": 1, 00:28:27.933 "bdev_name": "Malloc0", 00:28:27.933 "name": "Malloc0", 00:28:27.933 "nguid": "7B39A35D6CB9447387B78BD4B76DF2A7", 00:28:27.933 "uuid": "7b39a35d-6cb9-4473-87b7-8bd4b76df2a7" 00:28:27.933 } 00:28:27.933 ] 00:28:27.933 } 00:28:27.933 ] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:27.933 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=530227 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:27.934 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:27.934 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.192 Malloc1 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.192 Asynchronous Event Request test 00:28:28.192 Attaching to 10.0.0.2 00:28:28.192 Attached to 10.0.0.2 00:28:28.192 Registering asynchronous event callbacks... 00:28:28.192 Starting namespace attribute notice tests for all controllers... 00:28:28.192 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:28.192 aer_cb - Changed Namespace 00:28:28.192 Cleaning up... 00:28:28.192 [ 00:28:28.192 { 00:28:28.192 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:28.192 "subtype": "Discovery", 00:28:28.192 "listen_addresses": [], 00:28:28.192 "allow_any_host": true, 00:28:28.192 "hosts": [] 00:28:28.192 }, 00:28:28.192 { 00:28:28.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.192 "subtype": "NVMe", 00:28:28.192 "listen_addresses": [ 00:28:28.192 { 00:28:28.192 "trtype": "TCP", 00:28:28.192 "adrfam": "IPv4", 00:28:28.192 "traddr": "10.0.0.2", 00:28:28.192 "trsvcid": "4420" 00:28:28.192 } 00:28:28.192 ], 00:28:28.192 "allow_any_host": true, 00:28:28.192 "hosts": [], 00:28:28.192 "serial_number": "SPDK00000000000001", 00:28:28.192 "model_number": "SPDK bdev Controller", 00:28:28.192 "max_namespaces": 2, 00:28:28.192 "min_cntlid": 1, 00:28:28.192 "max_cntlid": 65519, 00:28:28.192 "namespaces": [ 00:28:28.192 { 00:28:28.192 "nsid": 1, 00:28:28.192 "bdev_name": "Malloc0", 00:28:28.192 "name": "Malloc0", 00:28:28.192 "nguid": "7B39A35D6CB9447387B78BD4B76DF2A7", 00:28:28.192 "uuid": "7b39a35d-6cb9-4473-87b7-8bd4b76df2a7" 00:28:28.192 }, 00:28:28.192 { 00:28:28.192 "nsid": 2, 00:28:28.192 "bdev_name": "Malloc1", 00:28:28.192 "name": "Malloc1", 00:28:28.192 "nguid": "64C11AB4B4754CD3B153E40E50A7659F", 00:28:28.192 "uuid": "64c11ab4-b475-4cd3-b153-e40e50a7659f" 00:28:28.192 } 00:28:28.192 ] 00:28:28.192 } 00:28:28.192 ] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 530227 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.192 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.193 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.193 rmmod nvme_tcp 00:28:28.193 rmmod nvme_fabrics 00:28:28.451 rmmod nvme_keyring 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 530068 ']' 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 530068 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 530068 ']' 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 530068 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 530068 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 530068' 00:28:28.451 killing process with pid 530068 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 530068 00:28:28.451 03:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 530068 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:28:28.711 03:27:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.614 03:27:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.614 00:28:30.614 real 0m6.036s 00:28:30.614 user 0m6.947s 00:28:30.614 sys 0m1.947s 00:28:30.614 03:27:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.614 03:27:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:30.614 ************************************ 00:28:30.614 END TEST nvmf_aer 00:28:30.614 ************************************ 00:28:30.614 03:27:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:30.614 03:27:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:30.614 03:27:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.614 03:27:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.615 ************************************ 00:28:30.615 START TEST nvmf_async_init 00:28:30.615 ************************************ 00:28:30.615 03:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:30.873 * Looking for test storage... 00:28:30.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.873 03:27:57 
nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.873 03:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e5dbf74d90934724b150c2a917a53b63 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.874 03:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.777 03:27:59 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.777 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:28:32.778 00:28:32.778 --- 10.0.0.2 ping statistics --- 00:28:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.778 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:28:32.778 00:28:32.778 --- 10.0.0.1 ping statistics --- 00:28:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.778 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=532161 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 532161 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 532161 ']' 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:32.778 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.037 [2024-07-23 03:27:59.361422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:33.037 [2024-07-23 03:27:59.361496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.037 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.037 [2024-07-23 03:27:59.433872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.037 [2024-07-23 03:27:59.527787] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.037 [2024-07-23 03:27:59.527850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.037 [2024-07-23 03:27:59.527867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.037 [2024-07-23 03:27:59.527880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.037 [2024-07-23 03:27:59.527892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
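[editor's note] The RPC sequence that follows in this trace (host/async_init.sh driving the freshly started nvmf_tgt) can be summarized as the short JSON-RPC flow below. This is a minimal sketch assuming the stock scripts/rpc.py client from the SPDK tree stands in for the test's rpc_cmd helper, and it reuses only the addresses, sizes, and NQNs that appear verbatim in the trace (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0, the generated NGUID); it is not the test script itself.

#!/usr/bin/env bash
# Sketch of the RPC calls visible in the trace below, issued against the running
# nvmf_tgt over its default /var/tmp/spdk.sock socket (assumption: scripts/rpc.py
# is used in place of the test framework's rpc_cmd wrapper).
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o                       # TCP transport init (options as in the trace)
$RPC bdev_null_create null0 1024 512                       # 1024 MB null bdev, 512 B blocks -> 2097152 blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # subsystem allowing any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
     -g e5dbf74d90934724b150c2a917a53b63                   # namespace with the uuidgen-derived NGUID
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode0                         # creates bdev nvme0n1 on the host side
$RPC bdev_get_bdevs -b nvme0n1                             # verify the reported uuid matches the NGUID above

The later steps in the trace follow the same pattern: bdev_nvme_reset_controller nvme0, then a second listener on port 4421 with --secure-channel plus nvmf_subsystem_add_host/bdev_nvme_attach_controller with a --psk file for the TLS variant, and finally bdev_nvme_detach_controller nvme0 during teardown.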
00:28:33.037 [2024-07-23 03:27:59.527922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.295 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 [2024-07-23 03:27:59.671194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 null0 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e5dbf74d90934724b150c2a917a53b63 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 [2024-07-23 03:27:59.711428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.296 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.554 nvme0n1 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.554 [ 00:28:33.554 { 00:28:33.554 "name": "nvme0n1", 00:28:33.554 "aliases": [ 00:28:33.554 "e5dbf74d-9093-4724-b150-c2a917a53b63" 00:28:33.554 ], 00:28:33.554 "product_name": "NVMe disk", 00:28:33.554 "block_size": 512, 00:28:33.554 "num_blocks": 2097152, 00:28:33.554 "uuid": "e5dbf74d-9093-4724-b150-c2a917a53b63", 00:28:33.554 "assigned_rate_limits": { 00:28:33.554 "rw_ios_per_sec": 0, 00:28:33.554 "rw_mbytes_per_sec": 0, 00:28:33.554 "r_mbytes_per_sec": 0, 00:28:33.554 "w_mbytes_per_sec": 0 00:28:33.554 }, 00:28:33.554 "claimed": false, 00:28:33.554 "zoned": false, 00:28:33.554 "supported_io_types": { 00:28:33.554 "read": true, 00:28:33.554 "write": true, 00:28:33.554 "unmap": false, 00:28:33.554 "write_zeroes": true, 00:28:33.554 "flush": true, 00:28:33.554 "reset": true, 00:28:33.554 "compare": true, 00:28:33.554 "compare_and_write": true, 00:28:33.554 "abort": true, 00:28:33.554 "nvme_admin": true, 00:28:33.554 "nvme_io": true 00:28:33.554 }, 00:28:33.554 "memory_domains": [ 00:28:33.554 { 00:28:33.554 "dma_device_id": "system", 00:28:33.554 "dma_device_type": 1 00:28:33.554 } 00:28:33.554 ], 00:28:33.554 "driver_specific": { 00:28:33.554 "nvme": [ 00:28:33.554 { 00:28:33.554 "trid": { 00:28:33.554 "trtype": "TCP", 00:28:33.554 "adrfam": "IPv4", 00:28:33.554 "traddr": "10.0.0.2", 00:28:33.554 "trsvcid": "4420", 00:28:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:33.554 }, 00:28:33.554 "ctrlr_data": { 00:28:33.554 "cntlid": 1, 00:28:33.554 "vendor_id": "0x8086", 00:28:33.554 "model_number": "SPDK bdev Controller", 00:28:33.554 "serial_number": "00000000000000000000", 00:28:33.554 "firmware_revision": "24.05.1", 00:28:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.554 "oacs": { 00:28:33.554 "security": 0, 00:28:33.554 "format": 0, 00:28:33.554 "firmware": 0, 00:28:33.554 "ns_manage": 0 00:28:33.554 }, 00:28:33.554 "multi_ctrlr": true, 00:28:33.554 "ana_reporting": false 00:28:33.554 }, 00:28:33.554 "vs": { 00:28:33.554 "nvme_version": "1.3" 00:28:33.554 }, 00:28:33.554 "ns_data": { 00:28:33.554 "id": 1, 00:28:33.554 "can_share": true 00:28:33.554 } 00:28:33.554 } 00:28:33.554 ], 00:28:33.554 "mp_policy": "active_passive" 00:28:33.554 } 00:28:33.554 } 00:28:33.554 ] 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.554 03:27:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.555 [2024-07-23 03:27:59.963957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:33.555 [2024-07-23 03:27:59.964047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1131760 (9): Bad file descriptor 00:28:33.555 [2024-07-23 03:28:00.106782] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.555 [ 00:28:33.555 { 00:28:33.555 "name": "nvme0n1", 00:28:33.555 "aliases": [ 00:28:33.555 "e5dbf74d-9093-4724-b150-c2a917a53b63" 00:28:33.555 ], 00:28:33.555 "product_name": "NVMe disk", 00:28:33.555 "block_size": 512, 00:28:33.555 "num_blocks": 2097152, 00:28:33.555 "uuid": "e5dbf74d-9093-4724-b150-c2a917a53b63", 00:28:33.555 "assigned_rate_limits": { 00:28:33.555 "rw_ios_per_sec": 0, 00:28:33.555 "rw_mbytes_per_sec": 0, 00:28:33.555 "r_mbytes_per_sec": 0, 00:28:33.555 "w_mbytes_per_sec": 0 00:28:33.555 }, 00:28:33.555 "claimed": false, 00:28:33.555 "zoned": false, 00:28:33.555 "supported_io_types": { 00:28:33.555 "read": true, 00:28:33.555 "write": true, 00:28:33.555 "unmap": false, 00:28:33.555 "write_zeroes": true, 00:28:33.555 "flush": true, 00:28:33.555 "reset": true, 00:28:33.555 "compare": true, 00:28:33.555 "compare_and_write": true, 00:28:33.555 "abort": true, 00:28:33.555 "nvme_admin": true, 00:28:33.555 "nvme_io": true 00:28:33.555 }, 00:28:33.555 "memory_domains": [ 00:28:33.555 { 00:28:33.555 "dma_device_id": "system", 00:28:33.555 "dma_device_type": 1 00:28:33.555 } 00:28:33.555 ], 00:28:33.555 "driver_specific": { 00:28:33.555 "nvme": [ 00:28:33.555 { 00:28:33.555 "trid": { 00:28:33.555 "trtype": "TCP", 00:28:33.555 "adrfam": "IPv4", 00:28:33.555 "traddr": "10.0.0.2", 00:28:33.555 "trsvcid": "4420", 00:28:33.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:33.555 }, 00:28:33.555 "ctrlr_data": { 00:28:33.555 "cntlid": 2, 00:28:33.555 "vendor_id": "0x8086", 00:28:33.555 "model_number": "SPDK bdev Controller", 00:28:33.555 "serial_number": "00000000000000000000", 00:28:33.555 "firmware_revision": "24.05.1", 00:28:33.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.555 "oacs": { 00:28:33.555 "security": 0, 00:28:33.555 "format": 0, 00:28:33.555 "firmware": 0, 00:28:33.555 "ns_manage": 0 00:28:33.555 }, 00:28:33.555 "multi_ctrlr": true, 00:28:33.555 "ana_reporting": false 00:28:33.555 }, 00:28:33.555 "vs": { 00:28:33.555 "nvme_version": "1.3" 00:28:33.555 }, 00:28:33.555 "ns_data": { 00:28:33.555 "id": 1, 00:28:33.555 "can_share": true 00:28:33.555 } 00:28:33.555 } 00:28:33.555 ], 00:28:33.555 "mp_policy": "active_passive" 00:28:33.555 } 00:28:33.555 } 00:28:33.555 ] 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.555 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@53 -- # mktemp 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hjQ5sPtvdk 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hjQ5sPtvdk 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.813 [2024-07-23 03:28:00.156631] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:33.813 [2024-07-23 03:28:00.156821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hjQ5sPtvdk 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.813 [2024-07-23 03:28:00.164641] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:33.813 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hjQ5sPtvdk 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.814 [2024-07-23 03:28:00.172671] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:33.814 [2024-07-23 03:28:00.172740] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:33.814 nvme0n1 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.814 [ 00:28:33.814 { 00:28:33.814 "name": "nvme0n1", 00:28:33.814 "aliases": [ 00:28:33.814 "e5dbf74d-9093-4724-b150-c2a917a53b63" 00:28:33.814 ], 
00:28:33.814 "product_name": "NVMe disk", 00:28:33.814 "block_size": 512, 00:28:33.814 "num_blocks": 2097152, 00:28:33.814 "uuid": "e5dbf74d-9093-4724-b150-c2a917a53b63", 00:28:33.814 "assigned_rate_limits": { 00:28:33.814 "rw_ios_per_sec": 0, 00:28:33.814 "rw_mbytes_per_sec": 0, 00:28:33.814 "r_mbytes_per_sec": 0, 00:28:33.814 "w_mbytes_per_sec": 0 00:28:33.814 }, 00:28:33.814 "claimed": false, 00:28:33.814 "zoned": false, 00:28:33.814 "supported_io_types": { 00:28:33.814 "read": true, 00:28:33.814 "write": true, 00:28:33.814 "unmap": false, 00:28:33.814 "write_zeroes": true, 00:28:33.814 "flush": true, 00:28:33.814 "reset": true, 00:28:33.814 "compare": true, 00:28:33.814 "compare_and_write": true, 00:28:33.814 "abort": true, 00:28:33.814 "nvme_admin": true, 00:28:33.814 "nvme_io": true 00:28:33.814 }, 00:28:33.814 "memory_domains": [ 00:28:33.814 { 00:28:33.814 "dma_device_id": "system", 00:28:33.814 "dma_device_type": 1 00:28:33.814 } 00:28:33.814 ], 00:28:33.814 "driver_specific": { 00:28:33.814 "nvme": [ 00:28:33.814 { 00:28:33.814 "trid": { 00:28:33.814 "trtype": "TCP", 00:28:33.814 "adrfam": "IPv4", 00:28:33.814 "traddr": "10.0.0.2", 00:28:33.814 "trsvcid": "4421", 00:28:33.814 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:33.814 }, 00:28:33.814 "ctrlr_data": { 00:28:33.814 "cntlid": 3, 00:28:33.814 "vendor_id": "0x8086", 00:28:33.814 "model_number": "SPDK bdev Controller", 00:28:33.814 "serial_number": "00000000000000000000", 00:28:33.814 "firmware_revision": "24.05.1", 00:28:33.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.814 "oacs": { 00:28:33.814 "security": 0, 00:28:33.814 "format": 0, 00:28:33.814 "firmware": 0, 00:28:33.814 "ns_manage": 0 00:28:33.814 }, 00:28:33.814 "multi_ctrlr": true, 00:28:33.814 "ana_reporting": false 00:28:33.814 }, 00:28:33.814 "vs": { 00:28:33.814 "nvme_version": "1.3" 00:28:33.814 }, 00:28:33.814 "ns_data": { 00:28:33.814 "id": 1, 00:28:33.814 "can_share": true 00:28:33.814 } 00:28:33.814 } 00:28:33.814 ], 00:28:33.814 "mp_policy": "active_passive" 00:28:33.814 } 00:28:33.814 } 00:28:33.814 ] 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.hjQ5sPtvdk 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.814 rmmod nvme_tcp 00:28:33.814 rmmod nvme_fabrics 00:28:33.814 rmmod nvme_keyring 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 532161 ']' 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 532161 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 532161 ']' 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 532161 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 532161 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 532161' 00:28:33.814 killing process with pid 532161 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 532161 00:28:33.814 [2024-07-23 03:28:00.346825] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:33.814 [2024-07-23 03:28:00.346861] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:33.814 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 532161 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.073 03:28:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.608 03:28:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:36.608 00:28:36.608 real 0m5.427s 00:28:36.608 user 0m2.029s 00:28:36.608 sys 0m1.794s 00:28:36.608 03:28:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:36.608 03:28:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 ************************************ 00:28:36.608 END TEST nvmf_async_init 00:28:36.608 ************************************ 00:28:36.608 03:28:02 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:36.608 03:28:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:36.608 03:28:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:36.608 03:28:02 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 ************************************ 00:28:36.608 START TEST dma 00:28:36.608 ************************************ 00:28:36.608 03:28:02 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:36.608 * Looking for test storage... 00:28:36.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:36.608 03:28:02 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.608 03:28:02 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.608 03:28:02 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.608 03:28:02 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.608 03:28:02 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.608 03:28:02 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.608 03:28:02 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.608 03:28:02 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:36.608 03:28:02 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.608 03:28:02 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.608 03:28:02 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:36.608 03:28:02 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:36.608 00:28:36.608 real 0m0.070s 00:28:36.608 user 0m0.034s 00:28:36.608 sys 0m0.041s 00:28:36.608 03:28:02 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:36.608 03:28:02 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 ************************************ 00:28:36.608 END TEST dma 00:28:36.608 ************************************ 00:28:36.608 03:28:02 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:36.608 03:28:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:36.608 03:28:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:36.608 03:28:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 ************************************ 00:28:36.608 START TEST 
nvmf_identify 00:28:36.608 ************************************ 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:36.608 * Looking for test storage... 00:28:36.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.608 03:28:02 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:36.609 03:28:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:38.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:28:38.512 00:28:38.512 --- 10.0.0.2 ping statistics --- 00:28:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.512 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:38.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:38.512 00:28:38.512 --- 10.0.0.1 ping statistics --- 00:28:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.512 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:38.512 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=534289 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 534289 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 534289 ']' 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:38.513 03:28:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.513 [2024-07-23 03:28:04.945641] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:38.513 [2024-07-23 03:28:04.945742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.513 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.513 [2024-07-23 03:28:05.018576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.771 [2024-07-23 03:28:05.114400] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.772 [2024-07-23 03:28:05.114458] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
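The nvmf_tcp_init trace above pairs the two E810 ports back-to-back: cvl_0_0 is moved into a dedicated network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. A minimal shell sketch of the same wiring, using the interface, namespace, and address values from this run (assumes the same cvl_0_0/cvl_0_1 net devices and root privileges):

# flush any stale addresses, then split the two ports across namespaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420)
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check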
00:28:38.772 [2024-07-23 03:28:05.114485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.772 [2024-07-23 03:28:05.114498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.772 [2024-07-23 03:28:05.114510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.772 [2024-07-23 03:28:05.114580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.772 [2024-07-23 03:28:05.114659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.772 [2024-07-23 03:28:05.114733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.772 [2024-07-23 03:28:05.114735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 [2024-07-23 03:28:05.243373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 Malloc0 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 [2024-07-23 03:28:05.320763] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:38.772 [ 00:28:38.772 { 00:28:38.772 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:38.772 "subtype": "Discovery", 00:28:38.772 "listen_addresses": [ 00:28:38.772 { 00:28:38.772 "trtype": "TCP", 00:28:38.772 "adrfam": "IPv4", 00:28:38.772 "traddr": "10.0.0.2", 00:28:38.772 "trsvcid": "4420" 00:28:38.772 } 00:28:38.772 ], 00:28:38.772 "allow_any_host": true, 00:28:38.772 "hosts": [] 00:28:38.772 }, 00:28:38.772 { 00:28:38.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.772 "subtype": "NVMe", 00:28:38.772 "listen_addresses": [ 00:28:38.772 { 00:28:38.772 "trtype": "TCP", 00:28:38.772 "adrfam": "IPv4", 00:28:38.772 "traddr": "10.0.0.2", 00:28:38.772 "trsvcid": "4420" 00:28:38.772 } 00:28:38.772 ], 00:28:38.772 "allow_any_host": true, 00:28:38.772 "hosts": [], 00:28:38.772 "serial_number": "SPDK00000000000001", 00:28:38.772 "model_number": "SPDK bdev Controller", 00:28:38.772 "max_namespaces": 32, 00:28:38.772 "min_cntlid": 1, 00:28:38.772 "max_cntlid": 65519, 00:28:38.772 "namespaces": [ 00:28:38.772 { 00:28:38.772 "nsid": 1, 00:28:38.772 "bdev_name": "Malloc0", 00:28:38.772 "name": "Malloc0", 00:28:38.772 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:38.772 "eui64": "ABCDEF0123456789", 00:28:38.772 "uuid": "38b97a13-3606-45ef-96d6-746904a37e27" 00:28:38.772 } 00:28:38.772 ] 00:28:38.772 } 00:28:38.772 ] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.772 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:39.032 [2024-07-23 03:28:05.362422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
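The subsystem listing above is the result of the rpc_cmd calls issued just before it (create transport, malloc bdev, subsystem, namespace, listeners). Outside the test harness, the same target layout can be reproduced with scripts/rpc.py from the SPDK tree against a running nvmf_tgt on the default /var/tmp/spdk.sock; a sketch using the values from this run:

# assumes nvmf_tgt is already running inside the target namespace
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # transport opts as used here
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems                                  # should match the JSON above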
00:28:39.032 [2024-07-23 03:28:05.362466] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid534426 ] 00:28:39.032 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.032 [2024-07-23 03:28:05.399005] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:39.032 [2024-07-23 03:28:05.399076] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:39.032 [2024-07-23 03:28:05.399086] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:39.032 [2024-07-23 03:28:05.399102] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:39.032 [2024-07-23 03:28:05.399117] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:39.032 [2024-07-23 03:28:05.399477] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:39.032 [2024-07-23 03:28:05.399532] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12cc980 0 00:28:39.032 [2024-07-23 03:28:05.405641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:39.032 [2024-07-23 03:28:05.405666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:39.032 [2024-07-23 03:28:05.405675] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:39.032 [2024-07-23 03:28:05.405681] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:39.032 [2024-07-23 03:28:05.405739] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.405753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.405761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.032 [2024-07-23 03:28:05.405782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:39.032 [2024-07-23 03:28:05.405810] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.032 [2024-07-23 03:28:05.413631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.032 [2024-07-23 03:28:05.413649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.032 [2024-07-23 03:28:05.413657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.413666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.032 [2024-07-23 03:28:05.413685] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:39.032 [2024-07-23 03:28:05.413698] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:39.032 [2024-07-23 03:28:05.413709] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:39.032 [2024-07-23 03:28:05.413733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.413742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.413748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.032 [2024-07-23 03:28:05.413760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.032 [2024-07-23 03:28:05.413784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.032 [2024-07-23 03:28:05.413966] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.032 [2024-07-23 03:28:05.413982] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.032 [2024-07-23 03:28:05.413989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.413996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.032 [2024-07-23 03:28:05.414011] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:39.032 [2024-07-23 03:28:05.414026] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:39.032 [2024-07-23 03:28:05.414044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.414053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.414059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.032 [2024-07-23 03:28:05.414070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.032 [2024-07-23 03:28:05.414092] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.032 [2024-07-23 03:28:05.414331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.032 [2024-07-23 03:28:05.414346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.032 [2024-07-23 03:28:05.414354] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.032 [2024-07-23 03:28:05.414360] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.032 [2024-07-23 03:28:05.414373] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:39.032 [2024-07-23 03:28:05.414387] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:39.032 [2024-07-23 03:28:05.414400] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.414424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.033 [2024-07-23 03:28:05.414445] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.414582] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 
03:28:05.414597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.414603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414610] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.414629] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:39.033 [2024-07-23 03:28:05.414647] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.414673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.033 [2024-07-23 03:28:05.414694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.414829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 03:28:05.414841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.414847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.414854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.414865] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:39.033 [2024-07-23 03:28:05.414874] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:39.033 [2024-07-23 03:28:05.414886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:39.033 [2024-07-23 03:28:05.414997] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:39.033 [2024-07-23 03:28:05.415011] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:39.033 [2024-07-23 03:28:05.415028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415041] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.415052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.033 [2024-07-23 03:28:05.415072] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.415261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 03:28:05.415273] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.415280] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.415297] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:39.033 [2024-07-23 03:28:05.415313] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415322] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.415339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.033 [2024-07-23 03:28:05.415360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.415496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 03:28:05.415511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.415518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.415534] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:39.033 [2024-07-23 03:28:05.415543] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.415556] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:39.033 [2024-07-23 03:28:05.415571] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.415590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.415610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.033 [2024-07-23 03:28:05.415655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.415907] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.033 [2024-07-23 03:28:05.415920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.033 [2024-07-23 03:28:05.415927] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415935] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12cc980): datao=0, datal=4096, cccid=0 00:28:39.033 [2024-07-23 03:28:05.415946] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13344c0) on tqpair(0x12cc980): expected_datao=0, payload_size=4096 00:28:39.033 [2024-07-23 03:28:05.415956] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415979] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.415990] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 03:28:05.416097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.416104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.416131] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:39.033 [2024-07-23 03:28:05.416141] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:39.033 [2024-07-23 03:28:05.416149] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:39.033 [2024-07-23 03:28:05.416158] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:39.033 [2024-07-23 03:28:05.416166] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:39.033 [2024-07-23 03:28:05.416175] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.416190] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.416203] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416211] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416217] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.416228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.033 [2024-07-23 03:28:05.416250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.033 [2024-07-23 03:28:05.416500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.033 [2024-07-23 03:28:05.416516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.033 [2024-07-23 03:28:05.416522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13344c0) on tqpair=0x12cc980 00:28:39.033 [2024-07-23 03:28:05.416547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.416571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:39.033 [2024-07-23 03:28:05.416581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.416603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.033 [2024-07-23 03:28:05.416620] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416650] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.416674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.033 [2024-07-23 03:28:05.416685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.033 [2024-07-23 03:28:05.416698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12cc980) 00:28:39.033 [2024-07-23 03:28:05.416706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.033 [2024-07-23 03:28:05.416715] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.416736] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:39.033 [2024-07-23 03:28:05.416748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.416755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.416765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.034 [2024-07-23 03:28:05.416787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13344c0, cid 0, qid 0 00:28:39.034 [2024-07-23 03:28:05.416814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334620, cid 1, qid 0 00:28:39.034 [2024-07-23 03:28:05.416822] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334780, cid 2, qid 0 00:28:39.034 [2024-07-23 03:28:05.416830] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13348e0, cid 3, qid 0 00:28:39.034 [2024-07-23 03:28:05.416837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334a40, cid 4, qid 0 00:28:39.034 [2024-07-23 03:28:05.417026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.417038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.417044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.417051] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334a40) on tqpair=0x12cc980 
00:28:39.034 [2024-07-23 03:28:05.417063] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:39.034 [2024-07-23 03:28:05.417072] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:39.034 [2024-07-23 03:28:05.417089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.417098] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.417124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.034 [2024-07-23 03:28:05.417145] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334a40, cid 4, qid 0 00:28:39.034 [2024-07-23 03:28:05.417350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.034 [2024-07-23 03:28:05.417363] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.034 [2024-07-23 03:28:05.417369] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.417376] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12cc980): datao=0, datal=4096, cccid=4 00:28:39.034 [2024-07-23 03:28:05.417383] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1334a40) on tqpair(0x12cc980): expected_datao=0, payload_size=4096 00:28:39.034 [2024-07-23 03:28:05.417391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.417411] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.417420] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.458628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.458649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.458657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.458663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334a40) on tqpair=0x12cc980 00:28:39.034 [2024-07-23 03:28:05.458685] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:39.034 [2024-07-23 03:28:05.458725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.458736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.458748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.034 [2024-07-23 03:28:05.458760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.458767] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.458773] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.458782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.034 [2024-07-23 03:28:05.458813] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334a40, cid 4, qid 0 00:28:39.034 [2024-07-23 03:28:05.458840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334ba0, cid 5, qid 0 00:28:39.034 [2024-07-23 03:28:05.459042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.034 [2024-07-23 03:28:05.459054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.034 [2024-07-23 03:28:05.459061] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.459068] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12cc980): datao=0, datal=1024, cccid=4 00:28:39.034 [2024-07-23 03:28:05.459076] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1334a40) on tqpair(0x12cc980): expected_datao=0, payload_size=1024 00:28:39.034 [2024-07-23 03:28:05.459084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.459094] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.459101] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.459110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.459119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.459139] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.459146] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334ba0) on tqpair=0x12cc980 00:28:39.034 [2024-07-23 03:28:05.499789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.499808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.499816] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.499823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334a40) on tqpair=0x12cc980 00:28:39.034 [2024-07-23 03:28:05.499843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.499853] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.499865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.034 [2024-07-23 03:28:05.499895] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334a40, cid 4, qid 0 00:28:39.034 [2024-07-23 03:28:05.500051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.034 [2024-07-23 03:28:05.500064] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.034 [2024-07-23 03:28:05.500071] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500077] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12cc980): datao=0, datal=3072, cccid=4 00:28:39.034 [2024-07-23 03:28:05.500085] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1334a40) on tqpair(0x12cc980): expected_datao=0, payload_size=3072 00:28:39.034 [2024-07-23 03:28:05.500093] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500120] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:28:39.034 [2024-07-23 03:28:05.500129] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.500238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.500244] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334a40) on tqpair=0x12cc980 00:28:39.034 [2024-07-23 03:28:05.500268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12cc980) 00:28:39.034 [2024-07-23 03:28:05.500287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.034 [2024-07-23 03:28:05.500316] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1334a40, cid 4, qid 0 00:28:39.034 [2024-07-23 03:28:05.500464] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.034 [2024-07-23 03:28:05.500476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.034 [2024-07-23 03:28:05.500483] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500489] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12cc980): datao=0, datal=8, cccid=4 00:28:39.034 [2024-07-23 03:28:05.500497] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1334a40) on tqpair(0x12cc980): expected_datao=0, payload_size=8 00:28:39.034 [2024-07-23 03:28:05.500504] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500514] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.500521] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.541772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.034 [2024-07-23 03:28:05.541791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.034 [2024-07-23 03:28:05.541799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.034 [2024-07-23 03:28:05.541806] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1334a40) on tqpair=0x12cc980 00:28:39.034 ===================================================== 00:28:39.034 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:39.034 ===================================================== 00:28:39.034 Controller Capabilities/Features 00:28:39.034 ================================ 00:28:39.034 Vendor ID: 0000 00:28:39.034 Subsystem Vendor ID: 0000 00:28:39.034 Serial Number: .................... 00:28:39.034 Model Number: ........................................ 
00:28:39.034 Firmware Version: 24.05.1 00:28:39.034 Recommended Arb Burst: 0 00:28:39.034 IEEE OUI Identifier: 00 00 00 00:28:39.034 Multi-path I/O 00:28:39.034 May have multiple subsystem ports: No 00:28:39.034 May have multiple controllers: No 00:28:39.034 Associated with SR-IOV VF: No 00:28:39.034 Max Data Transfer Size: 131072 00:28:39.034 Max Number of Namespaces: 0 00:28:39.034 Max Number of I/O Queues: 1024 00:28:39.034 NVMe Specification Version (VS): 1.3 00:28:39.034 NVMe Specification Version (Identify): 1.3 00:28:39.034 Maximum Queue Entries: 128 00:28:39.034 Contiguous Queues Required: Yes 00:28:39.034 Arbitration Mechanisms Supported 00:28:39.034 Weighted Round Robin: Not Supported 00:28:39.034 Vendor Specific: Not Supported 00:28:39.034 Reset Timeout: 15000 ms 00:28:39.034 Doorbell Stride: 4 bytes 00:28:39.034 NVM Subsystem Reset: Not Supported 00:28:39.035 Command Sets Supported 00:28:39.035 NVM Command Set: Supported 00:28:39.035 Boot Partition: Not Supported 00:28:39.035 Memory Page Size Minimum: 4096 bytes 00:28:39.035 Memory Page Size Maximum: 4096 bytes 00:28:39.035 Persistent Memory Region: Not Supported 00:28:39.035 Optional Asynchronous Events Supported 00:28:39.035 Namespace Attribute Notices: Not Supported 00:28:39.035 Firmware Activation Notices: Not Supported 00:28:39.035 ANA Change Notices: Not Supported 00:28:39.035 PLE Aggregate Log Change Notices: Not Supported 00:28:39.035 LBA Status Info Alert Notices: Not Supported 00:28:39.035 EGE Aggregate Log Change Notices: Not Supported 00:28:39.035 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.035 Zone Descriptor Change Notices: Not Supported 00:28:39.035 Discovery Log Change Notices: Supported 00:28:39.035 Controller Attributes 00:28:39.035 128-bit Host Identifier: Not Supported 00:28:39.035 Non-Operational Permissive Mode: Not Supported 00:28:39.035 NVM Sets: Not Supported 00:28:39.035 Read Recovery Levels: Not Supported 00:28:39.035 Endurance Groups: Not Supported 00:28:39.035 Predictable Latency Mode: Not Supported 00:28:39.035 Traffic Based Keep ALive: Not Supported 00:28:39.035 Namespace Granularity: Not Supported 00:28:39.035 SQ Associations: Not Supported 00:28:39.035 UUID List: Not Supported 00:28:39.035 Multi-Domain Subsystem: Not Supported 00:28:39.035 Fixed Capacity Management: Not Supported 00:28:39.035 Variable Capacity Management: Not Supported 00:28:39.035 Delete Endurance Group: Not Supported 00:28:39.035 Delete NVM Set: Not Supported 00:28:39.035 Extended LBA Formats Supported: Not Supported 00:28:39.035 Flexible Data Placement Supported: Not Supported 00:28:39.035 00:28:39.035 Controller Memory Buffer Support 00:28:39.035 ================================ 00:28:39.035 Supported: No 00:28:39.035 00:28:39.035 Persistent Memory Region Support 00:28:39.035 ================================ 00:28:39.035 Supported: No 00:28:39.035 00:28:39.035 Admin Command Set Attributes 00:28:39.035 ============================ 00:28:39.035 Security Send/Receive: Not Supported 00:28:39.035 Format NVM: Not Supported 00:28:39.035 Firmware Activate/Download: Not Supported 00:28:39.035 Namespace Management: Not Supported 00:28:39.035 Device Self-Test: Not Supported 00:28:39.035 Directives: Not Supported 00:28:39.035 NVMe-MI: Not Supported 00:28:39.035 Virtualization Management: Not Supported 00:28:39.035 Doorbell Buffer Config: Not Supported 00:28:39.035 Get LBA Status Capability: Not Supported 00:28:39.035 Command & Feature Lockdown Capability: Not Supported 00:28:39.035 Abort Command Limit: 1 00:28:39.035 
Async Event Request Limit: 4 00:28:39.035 Number of Firmware Slots: N/A 00:28:39.035 Firmware Slot 1 Read-Only: N/A 00:28:39.035 Firmware Activation Without Reset: N/A 00:28:39.035 Multiple Update Detection Support: N/A 00:28:39.035 Firmware Update Granularity: No Information Provided 00:28:39.035 Per-Namespace SMART Log: No 00:28:39.035 Asymmetric Namespace Access Log Page: Not Supported 00:28:39.035 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:39.035 Command Effects Log Page: Not Supported 00:28:39.035 Get Log Page Extended Data: Supported 00:28:39.035 Telemetry Log Pages: Not Supported 00:28:39.035 Persistent Event Log Pages: Not Supported 00:28:39.035 Supported Log Pages Log Page: May Support 00:28:39.035 Commands Supported & Effects Log Page: Not Supported 00:28:39.035 Feature Identifiers & Effects Log Page:May Support 00:28:39.035 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.035 Data Area 4 for Telemetry Log: Not Supported 00:28:39.035 Error Log Page Entries Supported: 128 00:28:39.035 Keep Alive: Not Supported 00:28:39.035 00:28:39.035 NVM Command Set Attributes 00:28:39.035 ========================== 00:28:39.035 Submission Queue Entry Size 00:28:39.035 Max: 1 00:28:39.035 Min: 1 00:28:39.035 Completion Queue Entry Size 00:28:39.035 Max: 1 00:28:39.035 Min: 1 00:28:39.035 Number of Namespaces: 0 00:28:39.035 Compare Command: Not Supported 00:28:39.035 Write Uncorrectable Command: Not Supported 00:28:39.035 Dataset Management Command: Not Supported 00:28:39.035 Write Zeroes Command: Not Supported 00:28:39.035 Set Features Save Field: Not Supported 00:28:39.035 Reservations: Not Supported 00:28:39.035 Timestamp: Not Supported 00:28:39.035 Copy: Not Supported 00:28:39.035 Volatile Write Cache: Not Present 00:28:39.035 Atomic Write Unit (Normal): 1 00:28:39.035 Atomic Write Unit (PFail): 1 00:28:39.035 Atomic Compare & Write Unit: 1 00:28:39.035 Fused Compare & Write: Supported 00:28:39.035 Scatter-Gather List 00:28:39.035 SGL Command Set: Supported 00:28:39.035 SGL Keyed: Supported 00:28:39.035 SGL Bit Bucket Descriptor: Not Supported 00:28:39.035 SGL Metadata Pointer: Not Supported 00:28:39.035 Oversized SGL: Not Supported 00:28:39.035 SGL Metadata Address: Not Supported 00:28:39.035 SGL Offset: Supported 00:28:39.035 Transport SGL Data Block: Not Supported 00:28:39.035 Replay Protected Memory Block: Not Supported 00:28:39.035 00:28:39.035 Firmware Slot Information 00:28:39.035 ========================= 00:28:39.035 Active slot: 0 00:28:39.035 00:28:39.035 00:28:39.035 Error Log 00:28:39.035 ========= 00:28:39.035 00:28:39.035 Active Namespaces 00:28:39.035 ================= 00:28:39.035 Discovery Log Page 00:28:39.035 ================== 00:28:39.035 Generation Counter: 2 00:28:39.035 Number of Records: 2 00:28:39.035 Record Format: 0 00:28:39.035 00:28:39.035 Discovery Log Entry 0 00:28:39.035 ---------------------- 00:28:39.035 Transport Type: 3 (TCP) 00:28:39.035 Address Family: 1 (IPv4) 00:28:39.035 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:39.035 Entry Flags: 00:28:39.035 Duplicate Returned Information: 1 00:28:39.035 Explicit Persistent Connection Support for Discovery: 1 00:28:39.035 Transport Requirements: 00:28:39.035 Secure Channel: Not Required 00:28:39.035 Port ID: 0 (0x0000) 00:28:39.035 Controller ID: 65535 (0xffff) 00:28:39.035 Admin Max SQ Size: 128 00:28:39.035 Transport Service Identifier: 4420 00:28:39.035 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:39.035 Transport Address: 10.0.0.2 00:28:39.035 
Discovery Log Entry 1 00:28:39.035 ---------------------- 00:28:39.035 Transport Type: 3 (TCP) 00:28:39.035 Address Family: 1 (IPv4) 00:28:39.035 Subsystem Type: 2 (NVM Subsystem) 00:28:39.035 Entry Flags: 00:28:39.035 Duplicate Returned Information: 0 00:28:39.035 Explicit Persistent Connection Support for Discovery: 0 00:28:39.035 Transport Requirements: 00:28:39.035 Secure Channel: Not Required 00:28:39.035 Port ID: 0 (0x0000) 00:28:39.035 Controller ID: 65535 (0xffff) 00:28:39.035 Admin Max SQ Size: 128 00:28:39.035 Transport Service Identifier: 4420 00:28:39.035 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:39.035 Transport Address: 10.0.0.2 [2024-07-23 03:28:05.541922] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:39.035 [2024-07-23 03:28:05.541950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.035 [2024-07-23 03:28:05.541962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.035 [2024-07-23 03:28:05.541972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.035 [2024-07-23 03:28:05.541981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.035 [2024-07-23 03:28:05.542000] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.035 [2024-07-23 03:28:05.542009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.035 [2024-07-23 03:28:05.542016] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12cc980) 00:28:39.035 [2024-07-23 03:28:05.542030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.035 [2024-07-23 03:28:05.542056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13348e0, cid 3, qid 0 00:28:39.035 [2024-07-23 03:28:05.542212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.035 [2024-07-23 03:28:05.542224] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.035 [2024-07-23 03:28:05.542231] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.035 [2024-07-23 03:28:05.542238] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13348e0) on tqpair=0x12cc980 00:28:39.035 [2024-07-23 03:28:05.542252] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.035 [2024-07-23 03:28:05.542260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.035 [2024-07-23 03:28:05.542266] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12cc980) 00:28:39.035 [2024-07-23 03:28:05.542276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.036 [2024-07-23 03:28:05.542303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13348e0, cid 3, qid 0 00:28:39.036 [2024-07-23 03:28:05.542453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.036 [2024-07-23 03:28:05.542465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.036 [2024-07-23 03:28:05.542472] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.542478] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13348e0) on tqpair=0x12cc980 00:28:39.036 [2024-07-23 03:28:05.542489] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:39.036 [2024-07-23 03:28:05.542497] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:39.036 [2024-07-23 03:28:05.542512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.542521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.542528] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12cc980) 00:28:39.036 [2024-07-23 03:28:05.542538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.036 [2024-07-23 03:28:05.542559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13348e0, cid 3, qid 0 00:28:39.036 [2024-07-23 03:28:05.546630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.036 [2024-07-23 03:28:05.546647] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.036 [2024-07-23 03:28:05.546654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.546660] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13348e0) on tqpair=0x12cc980 00:28:39.036 [2024-07-23 03:28:05.546680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.546690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.546696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12cc980) 00:28:39.036 [2024-07-23 03:28:05.546707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.036 [2024-07-23 03:28:05.546728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13348e0, cid 3, qid 0 00:28:39.036 [2024-07-23 03:28:05.546906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.036 [2024-07-23 03:28:05.546918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.036 [2024-07-23 03:28:05.546925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.036 [2024-07-23 03:28:05.546932] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13348e0) on tqpair=0x12cc980 00:28:39.036 [2024-07-23 03:28:05.546952] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:39.036 00:28:39.036 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:39.036 [2024-07-23 03:28:05.580664] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
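The second identify pass above targets the NVM subsystem (cnode1) directly rather than the discovery service. For reference, the invocation pattern is the one printed in the trace; the -L all option enables the SPDK log flags, which is what produces the nvme_tcp/nvme_ctrlr *DEBUG* lines interleaved through this log (a sketch, paths relative to an SPDK build tree):

# identify the NVM subsystem over TCP; drop "-L all" for a quiet run
build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all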
00:28:39.036 [2024-07-23 03:28:05.580707] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid534434 ] 00:28:39.036 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.297 [2024-07-23 03:28:05.616439] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:39.297 [2024-07-23 03:28:05.616484] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:39.297 [2024-07-23 03:28:05.616493] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:39.297 [2024-07-23 03:28:05.616506] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:39.297 [2024-07-23 03:28:05.616518] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:39.297 [2024-07-23 03:28:05.616773] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:39.297 [2024-07-23 03:28:05.616814] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x145e980 0 00:28:39.297 [2024-07-23 03:28:05.623824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:39.297 [2024-07-23 03:28:05.623844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:39.297 [2024-07-23 03:28:05.623852] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:39.297 [2024-07-23 03:28:05.623858] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:39.297 [2024-07-23 03:28:05.623908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.297 [2024-07-23 03:28:05.623919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.297 [2024-07-23 03:28:05.623926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.297 [2024-07-23 03:28:05.623940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:39.298 [2024-07-23 03:28:05.623966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.631634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.631652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.631659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.631666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.631684] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:39.298 [2024-07-23 03:28:05.631710] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:39.298 [2024-07-23 03:28:05.631719] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:39.298 [2024-07-23 03:28:05.631738] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.631747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 
03:28:05.631754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.631765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.631793] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.631937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.631952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.631959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.631966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.631979] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:39.298 [2024-07-23 03:28:05.631994] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:39.298 [2024-07-23 03:28:05.632007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632021] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.632032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.632054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.632185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.632200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.632206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.632223] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:39.298 [2024-07-23 03:28:05.632237] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.632250] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632257] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.632274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.632295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.632432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.632443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
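(Side note, not part of the captured output: the FABRIC PROPERTY GET/SET capsules traced here are the fabrics equivalents of reading VS, CAP, CC and CSTS during controller initialization. A hedged sketch of inspecting those cached register values once spdk_nvme_connect() has returned, continuing the hypothetical example above:)

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative helper, not part of the test: print the register fields
 * that the init state machine above is polling over the fabric. */
static void print_init_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("NVMe version %u.%u, MQES %u, CSTS.RDY %u\n",
	       vs.bits.mjr, vs.bits.mnr,
	       (unsigned)(cap.bits.mqes + 1), csts.bits.rdy);
}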
00:28:39.298 [2024-07-23 03:28:05.632450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.632466] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.632483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.632509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.632530] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.632661] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.632679] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.632687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632693] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.632702] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:39.298 [2024-07-23 03:28:05.632711] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.632724] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.632834] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:39.298 [2024-07-23 03:28:05.632841] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.632853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.632867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.632877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.632913] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.633060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.633073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.633080] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on 
tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.633096] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:39.298 [2024-07-23 03:28:05.633112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633128] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.633139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.633159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.633291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.633303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.633309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.633325] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:39.298 [2024-07-23 03:28:05.633333] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:39.298 [2024-07-23 03:28:05.633346] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:39.298 [2024-07-23 03:28:05.633360] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:39.298 [2024-07-23 03:28:05.633375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.298 [2024-07-23 03:28:05.633398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.298 [2024-07-23 03:28:05.633419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.298 [2024-07-23 03:28:05.633587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.298 [2024-07-23 03:28:05.633599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.298 [2024-07-23 03:28:05.633606] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633621] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=4096, cccid=0 00:28:39.298 [2024-07-23 03:28:05.633630] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c64c0) on tqpair(0x145e980): expected_datao=0, payload_size=4096 00:28:39.298 [2024-07-23 03:28:05.633638] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633657] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633665] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633760] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.298 [2024-07-23 03:28:05.633775] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.298 [2024-07-23 03:28:05.633781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.298 [2024-07-23 03:28:05.633788] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.298 [2024-07-23 03:28:05.633804] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:39.298 [2024-07-23 03:28:05.633814] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:39.298 [2024-07-23 03:28:05.633821] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:39.298 [2024-07-23 03:28:05.633828] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:39.298 [2024-07-23 03:28:05.633835] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:39.299 [2024-07-23 03:28:05.633843] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.633858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.633870] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.633877] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.633883] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.633894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.299 [2024-07-23 03:28:05.633916] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.299 [2024-07-23 03:28:05.634055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.299 [2024-07-23 03:28:05.634070] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.299 [2024-07-23 03:28:05.634077] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634084] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c64c0) on tqpair=0x145e980 00:28:39.299 [2024-07-23 03:28:05.634095] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634103] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.299 [2024-07-23 03:28:05.634134] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634141] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634147] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.299 [2024-07-23 03:28:05.634165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.299 [2024-07-23 03:28:05.634212] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634219] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634225] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.299 [2024-07-23 03:28:05.634242] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634260] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634273] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.299 [2024-07-23 03:28:05.634312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c64c0, cid 0, qid 0 00:28:39.299 [2024-07-23 03:28:05.634338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6620, cid 1, qid 0 00:28:39.299 [2024-07-23 03:28:05.634346] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6780, cid 2, qid 0 00:28:39.299 [2024-07-23 03:28:05.634353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c68e0, cid 3, qid 0 00:28:39.299 [2024-07-23 03:28:05.634361] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.299 [2024-07-23 03:28:05.634521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.299 [2024-07-23 03:28:05.634533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.299 [2024-07-23 03:28:05.634540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634546] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.299 [2024-07-23 03:28:05.634556] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:39.299 [2024-07-23 03:28:05.634564] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634578] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634590] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634600] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634643] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.299 [2024-07-23 03:28:05.634675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.299 [2024-07-23 03:28:05.634829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.299 [2024-07-23 03:28:05.634845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.299 [2024-07-23 03:28:05.634852] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.299 [2024-07-23 03:28:05.634927] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.634962] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.634969] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.634994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.299 [2024-07-23 03:28:05.635016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.299 [2024-07-23 03:28:05.635184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.299 [2024-07-23 03:28:05.635200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.299 [2024-07-23 03:28:05.635207] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.635213] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=4096, cccid=4 00:28:39.299 [2024-07-23 03:28:05.635221] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6a40) on tqpair(0x145e980): expected_datao=0, payload_size=4096 00:28:39.299 [2024-07-23 03:28:05.635228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.635245] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.635254] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.679642] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.299 [2024-07-23 03:28:05.679661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.299 [2024-07-23 03:28:05.679668] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.679675] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.299 [2024-07-23 03:28:05.679692] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:39.299 [2024-07-23 03:28:05.679717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.679736] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.679750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.679757] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.679769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.299 [2024-07-23 03:28:05.679792] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.299 [2024-07-23 03:28:05.679952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.299 [2024-07-23 03:28:05.679972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.299 [2024-07-23 03:28:05.679979] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.679986] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=4096, cccid=4 00:28:39.299 [2024-07-23 03:28:05.679994] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6a40) on tqpair(0x145e980): expected_datao=0, payload_size=4096 00:28:39.299 [2024-07-23 03:28:05.680001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.680019] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.680027] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.720740] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.299 [2024-07-23 03:28:05.720759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.299 [2024-07-23 03:28:05.720766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.720773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.299 [2024-07-23 03:28:05.720797] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.720817] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:39.299 [2024-07-23 03:28:05.720831] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.299 [2024-07-23 03:28:05.720839] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x145e980) 00:28:39.299 [2024-07-23 03:28:05.720851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.299 [2024-07-23 03:28:05.720874] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.300 [2024-07-23 03:28:05.721025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.300 [2024-07-23 03:28:05.721041] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.300 [2024-07-23 03:28:05.721048] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.721054] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=4096, cccid=4 00:28:39.300 [2024-07-23 03:28:05.721062] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6a40) on tqpair(0x145e980): expected_datao=0, payload_size=4096 00:28:39.300 [2024-07-23 03:28:05.721069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.721086] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.721095] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.761736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.761755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.761762] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.761769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.761785] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761800] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761816] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761836] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761848] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:39.300 [2024-07-23 03:28:05.761857] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:39.300 [2024-07-23 03:28:05.761866] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:39.300 [2024-07-23 03:28:05.761888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.761898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.761910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.761921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.761928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.761934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.761944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.300 [2024-07-23 03:28:05.761970] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.300 [2024-07-23 03:28:05.761982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6ba0, cid 5, qid 0 00:28:39.300 [2024-07-23 03:28:05.762123] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.762135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.762142] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.762160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.762169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.762176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762182] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6ba0) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.762199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6ba0, cid 5, qid 0 00:28:39.300 [2024-07-23 03:28:05.762380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.762395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.762402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6ba0) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.762426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762435] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762466] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6ba0, cid 5, qid 0 00:28:39.300 [2024-07-23 03:28:05.762596] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.762608] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.762627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6ba0) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.762653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6ba0, cid 5, qid 0 00:28:39.300 [2024-07-23 03:28:05.762823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.300 [2024-07-23 03:28:05.762835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.300 [2024-07-23 03:28:05.762842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6ba0) on tqpair=0x145e980 00:28:39.300 [2024-07-23 03:28:05.762868] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762878] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762935] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.762963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x145e980) 00:28:39.300 [2024-07-23 03:28:05.762972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.300 [2024-07-23 03:28:05.762994] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6ba0, cid 5, qid 0 00:28:39.300 [2024-07-23 03:28:05.763004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6a40, cid 4, qid 0 00:28:39.300 [2024-07-23 03:28:05.763012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x14c6d00, cid 6, qid 0 00:28:39.300 [2024-07-23 03:28:05.763020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6e60, cid 7, qid 0 00:28:39.300 [2024-07-23 03:28:05.763229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.300 [2024-07-23 03:28:05.763241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.300 [2024-07-23 03:28:05.763248] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763254] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=8192, cccid=5 00:28:39.300 [2024-07-23 03:28:05.763262] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6ba0) on tqpair(0x145e980): expected_datao=0, payload_size=8192 00:28:39.300 [2024-07-23 03:28:05.763269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763296] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763309] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.300 [2024-07-23 03:28:05.763328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.300 [2024-07-23 03:28:05.763334] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763340] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=512, cccid=4 00:28:39.300 [2024-07-23 03:28:05.763348] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6a40) on tqpair(0x145e980): expected_datao=0, payload_size=512 00:28:39.300 [2024-07-23 03:28:05.763355] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763364] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763371] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.300 [2024-07-23 03:28:05.763388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.300 [2024-07-23 03:28:05.763394] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763400] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=512, cccid=6 00:28:39.300 [2024-07-23 03:28:05.763407] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6d00) on tqpair(0x145e980): expected_datao=0, payload_size=512 00:28:39.300 [2024-07-23 03:28:05.763415] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.300 [2024-07-23 03:28:05.763424] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763430] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.301 [2024-07-23 03:28:05.763447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.301 [2024-07-23 03:28:05.763453] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763460] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145e980): datao=0, datal=4096, cccid=7 
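(For reference only, not part of the captured output: the GET LOG PAGE (02h) commands traced above are issued asynchronously on the admin queue and completed by the C2H data PDUs that follow. A hedged sketch of the same pattern with the public API, fetching the SMART / health log page; the helper and callback names are illustrative:)

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool g_log_done;

/* Completion callback invoked when the admin command finishes. */
static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
}

/* Request the health log page and poll the admin queue until it completes.
 * In a real application hp should point to a buffer obtained with
 * spdk_zmalloc() so every transport can use it directly. */
static int get_health_log(struct spdk_nvme_ctrlr *ctrlr,
			  struct spdk_nvme_health_information_page *hp)
{
	g_log_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
	    SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
	    hp, sizeof(*hp), 0, log_page_done, NULL) != 0) {
		return -1;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}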
00:28:39.301 [2024-07-23 03:28:05.763467] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c6e60) on tqpair(0x145e980): expected_datao=0, payload_size=4096 00:28:39.301 [2024-07-23 03:28:05.763474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763483] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763490] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.301 [2024-07-23 03:28:05.763527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.301 [2024-07-23 03:28:05.763533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763539] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6ba0) on tqpair=0x145e980 00:28:39.301 [2024-07-23 03:28:05.763558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.301 [2024-07-23 03:28:05.763584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.301 [2024-07-23 03:28:05.763590] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.763597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6a40) on tqpair=0x145e980 00:28:39.301 [2024-07-23 03:28:05.763612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.301 [2024-07-23 03:28:05.767639] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.301 [2024-07-23 03:28:05.767646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.767653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6d00) on tqpair=0x145e980 00:28:39.301 [2024-07-23 03:28:05.767671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.301 [2024-07-23 03:28:05.767681] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.301 [2024-07-23 03:28:05.767691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.301 [2024-07-23 03:28:05.767698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6e60) on tqpair=0x145e980 00:28:39.301 ===================================================== 00:28:39.301 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.301 ===================================================== 00:28:39.301 Controller Capabilities/Features 00:28:39.301 ================================ 00:28:39.301 Vendor ID: 8086 00:28:39.301 Subsystem Vendor ID: 8086 00:28:39.301 Serial Number: SPDK00000000000001 00:28:39.301 Model Number: SPDK bdev Controller 00:28:39.301 Firmware Version: 24.05.1 00:28:39.301 Recommended Arb Burst: 6 00:28:39.301 IEEE OUI Identifier: e4 d2 5c 00:28:39.301 Multi-path I/O 00:28:39.301 May have multiple subsystem ports: Yes 00:28:39.301 May have multiple controllers: Yes 00:28:39.301 Associated with SR-IOV VF: No 00:28:39.301 Max Data Transfer Size: 131072 00:28:39.301 Max Number of Namespaces: 32 00:28:39.301 Max Number of I/O Queues: 127 00:28:39.301 NVMe Specification Version (VS): 1.3 00:28:39.301 NVMe Specification Version (Identify): 1.3 00:28:39.301 Maximum Queue Entries: 128 00:28:39.301 Contiguous Queues Required: Yes 00:28:39.301 Arbitration Mechanisms Supported 00:28:39.301 Weighted Round Robin: Not Supported 00:28:39.301 Vendor 
Specific: Not Supported 00:28:39.301 Reset Timeout: 15000 ms 00:28:39.301 Doorbell Stride: 4 bytes 00:28:39.301 NVM Subsystem Reset: Not Supported 00:28:39.301 Command Sets Supported 00:28:39.301 NVM Command Set: Supported 00:28:39.301 Boot Partition: Not Supported 00:28:39.301 Memory Page Size Minimum: 4096 bytes 00:28:39.301 Memory Page Size Maximum: 4096 bytes 00:28:39.301 Persistent Memory Region: Not Supported 00:28:39.301 Optional Asynchronous Events Supported 00:28:39.301 Namespace Attribute Notices: Supported 00:28:39.301 Firmware Activation Notices: Not Supported 00:28:39.301 ANA Change Notices: Not Supported 00:28:39.301 PLE Aggregate Log Change Notices: Not Supported 00:28:39.301 LBA Status Info Alert Notices: Not Supported 00:28:39.301 EGE Aggregate Log Change Notices: Not Supported 00:28:39.301 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.301 Zone Descriptor Change Notices: Not Supported 00:28:39.301 Discovery Log Change Notices: Not Supported 00:28:39.301 Controller Attributes 00:28:39.301 128-bit Host Identifier: Supported 00:28:39.301 Non-Operational Permissive Mode: Not Supported 00:28:39.301 NVM Sets: Not Supported 00:28:39.301 Read Recovery Levels: Not Supported 00:28:39.301 Endurance Groups: Not Supported 00:28:39.301 Predictable Latency Mode: Not Supported 00:28:39.301 Traffic Based Keep ALive: Not Supported 00:28:39.301 Namespace Granularity: Not Supported 00:28:39.301 SQ Associations: Not Supported 00:28:39.301 UUID List: Not Supported 00:28:39.301 Multi-Domain Subsystem: Not Supported 00:28:39.301 Fixed Capacity Management: Not Supported 00:28:39.301 Variable Capacity Management: Not Supported 00:28:39.301 Delete Endurance Group: Not Supported 00:28:39.301 Delete NVM Set: Not Supported 00:28:39.301 Extended LBA Formats Supported: Not Supported 00:28:39.301 Flexible Data Placement Supported: Not Supported 00:28:39.301 00:28:39.301 Controller Memory Buffer Support 00:28:39.301 ================================ 00:28:39.301 Supported: No 00:28:39.301 00:28:39.301 Persistent Memory Region Support 00:28:39.301 ================================ 00:28:39.301 Supported: No 00:28:39.301 00:28:39.301 Admin Command Set Attributes 00:28:39.301 ============================ 00:28:39.301 Security Send/Receive: Not Supported 00:28:39.301 Format NVM: Not Supported 00:28:39.301 Firmware Activate/Download: Not Supported 00:28:39.301 Namespace Management: Not Supported 00:28:39.301 Device Self-Test: Not Supported 00:28:39.301 Directives: Not Supported 00:28:39.301 NVMe-MI: Not Supported 00:28:39.301 Virtualization Management: Not Supported 00:28:39.301 Doorbell Buffer Config: Not Supported 00:28:39.301 Get LBA Status Capability: Not Supported 00:28:39.301 Command & Feature Lockdown Capability: Not Supported 00:28:39.301 Abort Command Limit: 4 00:28:39.301 Async Event Request Limit: 4 00:28:39.301 Number of Firmware Slots: N/A 00:28:39.301 Firmware Slot 1 Read-Only: N/A 00:28:39.301 Firmware Activation Without Reset: N/A 00:28:39.301 Multiple Update Detection Support: N/A 00:28:39.301 Firmware Update Granularity: No Information Provided 00:28:39.301 Per-Namespace SMART Log: No 00:28:39.301 Asymmetric Namespace Access Log Page: Not Supported 00:28:39.301 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:39.301 Command Effects Log Page: Supported 00:28:39.301 Get Log Page Extended Data: Supported 00:28:39.301 Telemetry Log Pages: Not Supported 00:28:39.301 Persistent Event Log Pages: Not Supported 00:28:39.301 Supported Log Pages Log Page: May Support 00:28:39.301 Commands 
Supported & Effects Log Page: Not Supported 00:28:39.301 Feature Identifiers & Effects Log Page:May Support 00:28:39.301 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.301 Data Area 4 for Telemetry Log: Not Supported 00:28:39.301 Error Log Page Entries Supported: 128 00:28:39.301 Keep Alive: Supported 00:28:39.301 Keep Alive Granularity: 10000 ms 00:28:39.301 00:28:39.301 NVM Command Set Attributes 00:28:39.301 ========================== 00:28:39.301 Submission Queue Entry Size 00:28:39.301 Max: 64 00:28:39.301 Min: 64 00:28:39.301 Completion Queue Entry Size 00:28:39.301 Max: 16 00:28:39.301 Min: 16 00:28:39.301 Number of Namespaces: 32 00:28:39.301 Compare Command: Supported 00:28:39.301 Write Uncorrectable Command: Not Supported 00:28:39.301 Dataset Management Command: Supported 00:28:39.301 Write Zeroes Command: Supported 00:28:39.301 Set Features Save Field: Not Supported 00:28:39.301 Reservations: Supported 00:28:39.301 Timestamp: Not Supported 00:28:39.301 Copy: Supported 00:28:39.301 Volatile Write Cache: Present 00:28:39.301 Atomic Write Unit (Normal): 1 00:28:39.301 Atomic Write Unit (PFail): 1 00:28:39.301 Atomic Compare & Write Unit: 1 00:28:39.301 Fused Compare & Write: Supported 00:28:39.301 Scatter-Gather List 00:28:39.301 SGL Command Set: Supported 00:28:39.301 SGL Keyed: Supported 00:28:39.301 SGL Bit Bucket Descriptor: Not Supported 00:28:39.301 SGL Metadata Pointer: Not Supported 00:28:39.301 Oversized SGL: Not Supported 00:28:39.301 SGL Metadata Address: Not Supported 00:28:39.301 SGL Offset: Supported 00:28:39.301 Transport SGL Data Block: Not Supported 00:28:39.301 Replay Protected Memory Block: Not Supported 00:28:39.301 00:28:39.301 Firmware Slot Information 00:28:39.301 ========================= 00:28:39.301 Active slot: 1 00:28:39.301 Slot 1 Firmware Revision: 24.05.1 00:28:39.301 00:28:39.301 00:28:39.301 Commands Supported and Effects 00:28:39.301 ============================== 00:28:39.301 Admin Commands 00:28:39.301 -------------- 00:28:39.301 Get Log Page (02h): Supported 00:28:39.302 Identify (06h): Supported 00:28:39.302 Abort (08h): Supported 00:28:39.302 Set Features (09h): Supported 00:28:39.302 Get Features (0Ah): Supported 00:28:39.302 Asynchronous Event Request (0Ch): Supported 00:28:39.302 Keep Alive (18h): Supported 00:28:39.302 I/O Commands 00:28:39.302 ------------ 00:28:39.302 Flush (00h): Supported LBA-Change 00:28:39.302 Write (01h): Supported LBA-Change 00:28:39.302 Read (02h): Supported 00:28:39.302 Compare (05h): Supported 00:28:39.302 Write Zeroes (08h): Supported LBA-Change 00:28:39.302 Dataset Management (09h): Supported LBA-Change 00:28:39.302 Copy (19h): Supported LBA-Change 00:28:39.302 Unknown (79h): Supported LBA-Change 00:28:39.302 Unknown (7Ah): Supported 00:28:39.302 00:28:39.302 Error Log 00:28:39.302 ========= 00:28:39.302 00:28:39.302 Arbitration 00:28:39.302 =========== 00:28:39.302 Arbitration Burst: 1 00:28:39.302 00:28:39.302 Power Management 00:28:39.302 ================ 00:28:39.302 Number of Power States: 1 00:28:39.302 Current Power State: Power State #0 00:28:39.302 Power State #0: 00:28:39.302 Max Power: 0.00 W 00:28:39.302 Non-Operational State: Operational 00:28:39.302 Entry Latency: Not Reported 00:28:39.302 Exit Latency: Not Reported 00:28:39.302 Relative Read Throughput: 0 00:28:39.302 Relative Read Latency: 0 00:28:39.302 Relative Write Throughput: 0 00:28:39.302 Relative Write Latency: 0 00:28:39.302 Idle Power: Not Reported 00:28:39.302 Active Power: Not Reported 00:28:39.302 Non-Operational 
Permissive Mode: Not Supported 00:28:39.302 00:28:39.302 Health Information 00:28:39.302 ================== 00:28:39.302 Critical Warnings: 00:28:39.302 Available Spare Space: OK 00:28:39.302 Temperature: OK 00:28:39.302 Device Reliability: OK 00:28:39.302 Read Only: No 00:28:39.302 Volatile Memory Backup: OK 00:28:39.302 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:39.302 Temperature Threshold: [2024-07-23 03:28:05.767817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.767829] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x145e980) 00:28:39.302 [2024-07-23 03:28:05.767841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.302 [2024-07-23 03:28:05.767865] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c6e60, cid 7, qid 0 00:28:39.302 [2024-07-23 03:28:05.768018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.302 [2024-07-23 03:28:05.768033] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.302 [2024-07-23 03:28:05.768040] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c6e60) on tqpair=0x145e980 00:28:39.302 [2024-07-23 03:28:05.768086] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:39.302 [2024-07-23 03:28:05.768108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.302 [2024-07-23 03:28:05.768120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.302 [2024-07-23 03:28:05.768130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.302 [2024-07-23 03:28:05.768139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.302 [2024-07-23 03:28:05.768152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768181] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145e980) 00:28:39.302 [2024-07-23 03:28:05.768191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.302 [2024-07-23 03:28:05.768213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c68e0, cid 3, qid 0 00:28:39.302 [2024-07-23 03:28:05.768368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.302 [2024-07-23 03:28:05.768381] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.302 [2024-07-23 03:28:05.768388] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c68e0) on tqpair=0x145e980 00:28:39.302 [2024-07-23 03:28:05.768407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768414] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145e980) 00:28:39.302 [2024-07-23 03:28:05.768431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.302 [2024-07-23 03:28:05.768457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c68e0, cid 3, qid 0 00:28:39.302 [2024-07-23 03:28:05.768605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.302 [2024-07-23 03:28:05.768631] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.302 [2024-07-23 03:28:05.768644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.302 [2024-07-23 03:28:05.768651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14c68e0) on tqpair=0x145e980 00:28:39.302 [2024-07-23 03:28:05.768660] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:39.302 [2024-07-23 03:28:05.768672] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:39.302
[shutdown-status poll: the same DEBUG sequence (nvme_tcp_build_contig_request, nvme_tcp_qpair_capsule_cmd_send with capsule_cmd cid=3 on tqpair(0x145e980), FABRIC PROPERTY GET qid:0 cid:3, nvme_tcp_qpair_cmd_send_complete, then nvme_tcp_pdu_ch_handle, nvme_tcp_pdu_psh_handle, nvme_tcp_capsule_resp_hdr_handle and nvme_tcp_req_complete_safe for each response) repeats identically from 03:28:05.768691 through 03:28:05.775840]
[2024-07-23 03:28:05.775854] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:39.304 0 Kelvin (-273 Celsius) 00:28:39.304 Available Spare: 0% 00:28:39.304 Available Spare Threshold: 0% 00:28:39.304 Life Percentage Used: 0% 00:28:39.304 Data Units Read: 0 00:28:39.304 Data Units Written: 0 00:28:39.304 Host Read Commands: 0 00:28:39.304 Host Write Commands: 0 00:28:39.304 Controller Busy Time: 0 minutes 00:28:39.304 Power Cycles: 0 00:28:39.304 Power On Hours: 0 hours 00:28:39.304 Unsafe Shutdowns: 0 00:28:39.304 Unrecoverable Media Errors: 0 00:28:39.304 Lifetime Error Log Entries: 0 00:28:39.304 Warning Temperature Time: 0 minutes 00:28:39.304 Critical Temperature 
Time: 0 minutes 00:28:39.304 00:28:39.304 Number of Queues 00:28:39.304 ================ 00:28:39.304 Number of I/O Submission Queues: 127 00:28:39.304 Number of I/O Completion Queues: 127 00:28:39.304 00:28:39.304 Active Namespaces 00:28:39.304 ================= 00:28:39.304 Namespace ID:1 00:28:39.304 Error Recovery Timeout: Unlimited 00:28:39.304 Command Set Identifier: NVM (00h) 00:28:39.304 Deallocate: Supported 00:28:39.304 Deallocated/Unwritten Error: Not Supported 00:28:39.304 Deallocated Read Value: Unknown 00:28:39.304 Deallocate in Write Zeroes: Not Supported 00:28:39.304 Deallocated Guard Field: 0xFFFF 00:28:39.304 Flush: Supported 00:28:39.304 Reservation: Supported 00:28:39.304 Namespace Sharing Capabilities: Multiple Controllers 00:28:39.304 Size (in LBAs): 131072 (0GiB) 00:28:39.304 Capacity (in LBAs): 131072 (0GiB) 00:28:39.304 Utilization (in LBAs): 131072 (0GiB) 00:28:39.304 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:39.304 EUI64: ABCDEF0123456789 00:28:39.304 UUID: 38b97a13-3606-45ef-96d6-746904a37e27 00:28:39.304 Thin Provisioning: Not Supported 00:28:39.304 Per-NS Atomic Units: Yes 00:28:39.304 Atomic Boundary Size (Normal): 0 00:28:39.304 Atomic Boundary Size (PFail): 0 00:28:39.304 Atomic Boundary Offset: 0 00:28:39.304 Maximum Single Source Range Length: 65535 00:28:39.304 Maximum Copy Length: 65535 00:28:39.304 Maximum Source Range Count: 1 00:28:39.304 NGUID/EUI64 Never Reused: No 00:28:39.304 Namespace Write Protected: No 00:28:39.304 Number of LBA Formats: 1 00:28:39.304 Current LBA Format: LBA Format #00 00:28:39.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:39.304 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.304 rmmod nvme_tcp 00:28:39.304 rmmod nvme_fabrics 00:28:39.304 rmmod nvme_keyring 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 534289 ']' 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 534289 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 534289 ']' 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 
534289 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:39.304 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 534289 00:28:39.563 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:39.563 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:39.563 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 534289' 00:28:39.563 killing process with pid 534289 00:28:39.563 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 534289 00:28:39.563 03:28:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 534289 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.563 03:28:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.097 03:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.097 00:28:42.097 real 0m5.411s 00:28:42.097 user 0m4.686s 00:28:42.097 sys 0m1.790s 00:28:42.097 03:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:42.097 03:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:42.097 ************************************ 00:28:42.097 END TEST nvmf_identify 00:28:42.097 ************************************ 00:28:42.097 03:28:08 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:42.097 03:28:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:42.097 03:28:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:42.097 03:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.097 ************************************ 00:28:42.097 START TEST nvmf_perf 00:28:42.097 ************************************ 00:28:42.097 03:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:42.097 * Looking for test storage... 
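For reference, the controller and namespace report dumped by the nvmf_identify stage above is the kind of output SPDK's identify example app produces when pointed at the TCP subsystem; a minimal invocation sketch (the address, port and subsystem NQN are the ones used in this run, while the binary path and exact flags are illustrative rather than the test's own command line):

  # query the NVMe-oF TCP subsystem over the fabric and print controller/namespace data
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
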
00:28:42.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.098 03:28:08 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.098 03:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.472 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.473 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.731 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:28:43.732 00:28:43.732 --- 10.0.0.2 ping statistics --- 00:28:43.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.732 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:28:43.732 00:28:43.732 --- 10.0.0.1 ping statistics --- 00:28:43.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.732 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=536361 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 536361 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 536361 ']' 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.732 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:43.732 [2024-07-23 03:28:10.247093] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:43.732 [2024-07-23 03:28:10.247181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.732 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.990 [2024-07-23 03:28:10.324120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:43.990 [2024-07-23 03:28:10.416163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.990 [2024-07-23 03:28:10.416225] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
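Condensed, the test-bed preparation and target launch traced above come down to the following sequence (the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, the 10.0.0.x addresses and the nvmf_tgt arguments are taken from this run; the wait-for-RPC loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact code):

  # put one port of the NIC into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

  # start the SPDK NVMe-oF target inside the target namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

Keeping the target in its own network namespace lets a single machine act as both host and target while the NVMe/TCP traffic still traverses the real NIC ports rather than loopback.
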
00:28:43.990 [2024-07-23 03:28:10.416240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.990 [2024-07-23 03:28:10.416253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.990 [2024-07-23 03:28:10.416266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.990 [2024-07-23 03:28:10.416329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.990 [2024-07-23 03:28:10.416366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.990 [2024-07-23 03:28:10.416488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.990 [2024-07-23 03:28:10.416489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:43.990 03:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:47.268 03:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:47.268 03:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:47.525 03:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:47.525 03:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:47.783 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:47.783 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:47.783 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:47.783 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:47.783 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:48.039 [2024-07-23 03:28:14.387305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.039 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.295 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:48.295 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.560 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:48.560 03:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:48.830 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.830 [2024-07-23 03:28:15.394867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.088 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.088 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:49.088 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:49.088 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:49.088 03:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:50.460 Initializing NVMe Controllers 00:28:50.460 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:50.460 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:50.460 Initialization complete. Launching workers. 00:28:50.460 ======================================================== 00:28:50.460 Latency(us) 00:28:50.460 Device Information : IOPS MiB/s Average min max 00:28:50.460 PCIE (0000:88:00.0) NSID 1 from core 0: 84701.75 330.87 377.10 43.04 7291.30 00:28:50.460 ======================================================== 00:28:50.460 Total : 84701.75 330.87 377.10 43.04 7291.30 00:28:50.460 00:28:50.460 03:28:16 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.460 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.833 Initializing NVMe Controllers 00:28:51.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:51.833 Initialization complete. Launching workers. 
00:28:51.833 ======================================================== 00:28:51.833 Latency(us) 00:28:51.833 Device Information : IOPS MiB/s Average min max 00:28:51.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13818.91 198.35 45921.21 00:28:51.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18649.94 5986.34 49887.40 00:28:51.833 ======================================================== 00:28:51.833 Total : 131.00 0.51 15884.08 198.35 49887.40 00:28:51.833 00:28:51.833 03:28:18 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.833 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.767 Initializing NVMe Controllers 00:28:52.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:52.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:52.767 Initialization complete. Launching workers. 00:28:52.767 ======================================================== 00:28:52.767 Latency(us) 00:28:52.767 Device Information : IOPS MiB/s Average min max 00:28:52.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6446.40 25.18 4965.28 991.69 12162.18 00:28:52.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3847.30 15.03 8344.83 6534.31 18902.58 00:28:52.767 ======================================================== 00:28:52.767 Total : 10293.71 40.21 6228.40 991.69 18902.58 00:28:52.767 00:28:52.767 03:28:19 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:52.767 03:28:19 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:52.767 03:28:19 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.767 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.297 Initializing NVMe Controllers 00:28:55.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.297 Controller IO queue size 128, less than required. 00:28:55.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.297 Controller IO queue size 128, less than required. 00:28:55.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:55.297 Initialization complete. Launching workers. 
00:28:55.297 ======================================================== 00:28:55.297 Latency(us) 00:28:55.297 Device Information : IOPS MiB/s Average min max 00:28:55.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 995.00 248.75 130996.04 77383.03 186346.82 00:28:55.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.50 149.12 223879.19 87542.78 339689.08 00:28:55.297 ======================================================== 00:28:55.297 Total : 1591.50 397.88 165808.98 77383.03 339689.08 00:28:55.297 00:28:55.297 03:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:55.297 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.555 No valid NVMe controllers or AIO or URING devices found 00:28:55.555 Initializing NVMe Controllers 00:28:55.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.555 Controller IO queue size 128, less than required. 00:28:55.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.555 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:55.555 Controller IO queue size 128, less than required. 00:28:55.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.555 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:55.555 WARNING: Some requested NVMe devices were skipped 00:28:55.555 03:28:22 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:55.555 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.083 Initializing NVMe Controllers 00:28:58.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.083 Controller IO queue size 128, less than required. 00:28:58.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:58.083 Controller IO queue size 128, less than required. 00:28:58.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:58.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:58.083 Initialization complete. Launching workers. 
00:28:58.083 00:28:58.083 ==================== 00:28:58.083 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:58.083 TCP transport: 00:28:58.083 polls: 32169 00:28:58.083 idle_polls: 10286 00:28:58.083 sock_completions: 21883 00:28:58.083 nvme_completions: 4013 00:28:58.083 submitted_requests: 5954 00:28:58.083 queued_requests: 1 00:28:58.083 00:28:58.083 ==================== 00:28:58.083 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:58.083 TCP transport: 00:28:58.083 polls: 31872 00:28:58.083 idle_polls: 9687 00:28:58.083 sock_completions: 22185 00:28:58.083 nvme_completions: 3371 00:28:58.083 submitted_requests: 5022 00:28:58.083 queued_requests: 1 00:28:58.083 ======================================================== 00:28:58.083 Latency(us) 00:28:58.083 Device Information : IOPS MiB/s Average min max 00:28:58.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1001.86 250.46 132468.22 63067.58 189636.41 00:28:58.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 841.54 210.39 154176.30 55770.84 221796.72 00:28:58.083 ======================================================== 00:28:58.083 Total : 1843.40 460.85 142378.30 55770.84 221796.72 00:28:58.083 00:28:58.083 03:28:24 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:58.083 03:28:24 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.649 03:28:24 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:58.649 03:28:24 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:58.649 03:28:24 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0498e117-761f-4620-83bf-f4bf702fb25f 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0498e117-761f-4620-83bf-f4bf702fb25f 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=0498e117-761f-4620-83bf-f4bf702fb25f 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:01.923 { 00:29:01.923 "uuid": "0498e117-761f-4620-83bf-f4bf702fb25f", 00:29:01.923 "name": "lvs_0", 00:29:01.923 "base_bdev": "Nvme0n1", 00:29:01.923 "total_data_clusters": 238234, 00:29:01.923 "free_clusters": 238234, 00:29:01.923 "block_size": 512, 00:29:01.923 "cluster_size": 4194304 00:29:01.923 } 00:29:01.923 ]' 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="0498e117-761f-4620-83bf-f4bf702fb25f") .free_clusters' 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:29:01.923 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="0498e117-761f-4620-83bf-f4bf702fb25f") .cluster_size' 00:29:02.180 03:28:28 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:02.180 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:29:02.180 03:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:29:02.180 952936 00:29:02.180 03:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:02.180 03:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:02.180 03:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0498e117-761f-4620-83bf-f4bf702fb25f lbd_0 20480 00:29:02.744 03:28:29 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=58673a4c-d2c6-4740-92fe-5c2573966a9a 00:29:02.744 03:28:29 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 58673a4c-d2c6-4740-92fe-5c2573966a9a lvs_n_0 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1047a248-34a6-4f41-96b7-dead1b1ff9c1 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1047a248-34a6-4f41-96b7-dead1b1ff9c1 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=1047a248-34a6-4f41-96b7-dead1b1ff9c1 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:03.676 03:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:03.676 { 00:29:03.676 "uuid": "0498e117-761f-4620-83bf-f4bf702fb25f", 00:29:03.676 "name": "lvs_0", 00:29:03.676 "base_bdev": "Nvme0n1", 00:29:03.676 "total_data_clusters": 238234, 00:29:03.676 "free_clusters": 233114, 00:29:03.676 "block_size": 512, 00:29:03.676 "cluster_size": 4194304 00:29:03.676 }, 00:29:03.676 { 00:29:03.676 "uuid": "1047a248-34a6-4f41-96b7-dead1b1ff9c1", 00:29:03.676 "name": "lvs_n_0", 00:29:03.676 "base_bdev": "58673a4c-d2c6-4740-92fe-5c2573966a9a", 00:29:03.676 "total_data_clusters": 5114, 00:29:03.676 "free_clusters": 5114, 00:29:03.676 "block_size": 512, 00:29:03.676 "cluster_size": 4194304 00:29:03.676 } 00:29:03.676 ]' 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="1047a248-34a6-4f41-96b7-dead1b1ff9c1") .free_clusters' 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="1047a248-34a6-4f41-96b7-dead1b1ff9c1") .cluster_size' 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:29:03.676 20456 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:03.676 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1047a248-34a6-4f41-96b7-dead1b1ff9c1 lbd_nest_0 20456 00:29:03.933 03:28:30 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=ca7e0a89-4c10-4faa-aaf3-0dbbd07ddcbf 00:29:03.933 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.191 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:04.191 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ca7e0a89-4c10-4faa-aaf3-0dbbd07ddcbf 00:29:04.448 03:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.704 03:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:04.704 03:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:04.704 03:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:04.704 03:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.704 03:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.704 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.897 Initializing NVMe Controllers 00:29:16.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.897 Initialization complete. Launching workers. 00:29:16.897 ======================================================== 00:29:16.897 Latency(us) 00:29:16.897 Device Information : IOPS MiB/s Average min max 00:29:16.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.50 0.02 21095.10 238.54 48465.86 00:29:16.897 ======================================================== 00:29:16.898 Total : 47.50 0.02 21095.10 238.54 48465.86 00:29:16.898 00:29:16.898 03:28:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:16.898 03:28:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:16.898 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.858 Initializing NVMe Controllers 00:29:26.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:26.858 Initialization complete. Launching workers. 
00:29:26.858 ======================================================== 00:29:26.858 Latency(us) 00:29:26.858 Device Information : IOPS MiB/s Average min max 00:29:26.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.70 10.09 12410.16 4972.51 50862.39 00:29:26.858 ======================================================== 00:29:26.858 Total : 80.70 10.09 12410.16 4972.51 50862.39 00:29:26.858 00:29:26.858 03:28:52 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:26.858 03:28:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:26.858 03:28:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.858 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.820 Initializing NVMe Controllers 00:29:36.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.820 Initialization complete. Launching workers. 00:29:36.820 ======================================================== 00:29:36.820 Latency(us) 00:29:36.820 Device Information : IOPS MiB/s Average min max 00:29:36.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6472.03 3.16 4954.93 310.19 47833.17 00:29:36.820 ======================================================== 00:29:36.820 Total : 6472.03 3.16 4954.93 310.19 47833.17 00:29:36.820 00:29:36.820 03:29:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:36.820 03:29:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.820 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.783 Initializing NVMe Controllers 00:29:46.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.783 Initialization complete. Launching workers. 00:29:46.783 ======================================================== 00:29:46.783 Latency(us) 00:29:46.783 Device Information : IOPS MiB/s Average min max 00:29:46.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1893.74 236.72 16900.59 1589.68 40821.98 00:29:46.783 ======================================================== 00:29:46.783 Total : 1893.74 236.72 16900.59 1589.68 40821.98 00:29:46.783 00:29:46.783 03:29:12 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:46.783 03:29:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:46.783 03:29:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:46.783 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.762 Initializing NVMe Controllers 00:29:56.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.762 Controller IO queue size 128, less than required. 00:29:56.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:56.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.763 Initialization complete. Launching workers. 00:29:56.763 ======================================================== 00:29:56.763 Latency(us) 00:29:56.763 Device Information : IOPS MiB/s Average min max 00:29:56.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11707.28 5.72 10933.93 1643.99 25290.88 00:29:56.763 ======================================================== 00:29:56.763 Total : 11707.28 5.72 10933.93 1643.99 25290.88 00:29:56.763 00:29:56.763 03:29:23 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:56.763 03:29:23 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:56.763 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.002 Initializing NVMe Controllers 00:30:09.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.002 Controller IO queue size 128, less than required. 00:30:09.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:09.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.002 Initialization complete. Launching workers. 00:30:09.002 ======================================================== 00:30:09.002 Latency(us) 00:30:09.002 Device Information : IOPS MiB/s Average min max 00:30:09.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1214.40 151.80 105765.64 15848.22 215366.93 00:30:09.002 ======================================================== 00:30:09.002 Total : 1214.40 151.80 105765.64 15848.22 215366.93 00:30:09.002 00:30:09.002 03:29:33 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.002 03:29:33 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca7e0a89-4c10-4faa-aaf3-0dbbd07ddcbf 00:30:09.002 03:29:34 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.003 03:29:34 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58673a4c-d2c6-4740-92fe-5c2573966a9a 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:09.003 rmmod nvme_tcp 00:30:09.003 rmmod nvme_fabrics 00:30:09.003 rmmod nvme_keyring 00:30:09.003 03:29:35 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 536361 ']' 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 536361 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 536361 ']' 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 536361 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 536361 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 536361' 00:30:09.003 killing process with pid 536361 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 536361 00:30:09.003 03:29:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 536361 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.904 03:29:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.810 03:29:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.810 00:30:12.810 real 1m30.832s 00:30:12.810 user 5m36.308s 00:30:12.810 sys 0m15.562s 00:30:12.810 03:29:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:12.810 03:29:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:12.810 ************************************ 00:30:12.810 END TEST nvmf_perf 00:30:12.810 ************************************ 00:30:12.810 03:29:39 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:12.810 03:29:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:12.811 03:29:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:12.811 03:29:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.811 ************************************ 00:30:12.811 START TEST nvmf_fio_host 00:30:12.811 ************************************ 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:12.811 * Looking for test storage... 
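The nvmf_perf stage that completed above sweeps queue depth against I/O size, launching spdk_nvme_perf once per combination against the TCP listener on 10.0.0.2:4420 (queue depths 1/32/128, I/O sizes 512 B and 128 KiB, 50/50 random read/write for 10 s each, as traced in host/perf.sh@95-99). A condensed sketch of that loop, with the binary path shortened:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            # one 10-second 50/50 randrw run per (queue depth, I/O size) pair
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done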
00:30:12.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.811 03:29:39 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
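The trace above shows nvmf/common.sh resolving each supported NIC PCI function to its kernel net device by globbing sysfs (nvmf/common.sh@383 and @400). A minimal stand-alone sketch of that lookup; the PCI address is copied from the log, the variable names are illustrative:

    #!/usr/bin/env bash
    # Resolve the net devices bound to a given PCI function
    pci=0000:0a:00.0
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue          # skip if no netdev is bound to this function
        netdev=${path##*/}                  # strip the sysfs prefix, e.g. cvl_0_0
        echo "Found net devices under $pci: $netdev"
    done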
00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:14.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:14.716 00:30:14.716 --- 10.0.0.2 ping statistics --- 00:30:14.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.716 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:30:14.716 00:30:14.716 --- 10.0.0.1 ping statistics --- 00:30:14.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.716 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:14.716 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=548322 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 548322 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 548322 ']' 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:14.717 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.976 [2024-07-23 03:29:41.328308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:14.976 [2024-07-23 03:29:41.328399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.976 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.976 [2024-07-23 03:29:41.392699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.976 [2024-07-23 03:29:41.477828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:14.976 [2024-07-23 03:29:41.477879] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.976 [2024-07-23 03:29:41.477904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.976 [2024-07-23 03:29:41.477915] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.976 [2024-07-23 03:29:41.477925] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.976 [2024-07-23 03:29:41.477991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.976 [2024-07-23 03:29:41.478021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.976 [2024-07-23 03:29:41.478076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.976 [2024-07-23 03:29:41.478078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.234 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:15.234 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:15.234 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:15.492 [2024-07-23 03:29:41.885356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.492 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:15.492 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.493 03:29:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.493 03:29:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:15.751 Malloc1 00:30:15.751 03:29:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.009 03:29:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:16.266 03:29:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.524 [2024-07-23 03:29:42.973788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.524 03:29:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:16.782 03:29:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.039 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:17.039 fio-3.35 00:30:17.039 Starting 1 thread 00:30:17.039 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.569 00:30:19.569 test: (groupid=0, jobs=1): err= 0: pid=548685: Tue Jul 23 03:29:45 2024 00:30:19.569 read: IOPS=9152, BW=35.8MiB/s (37.5MB/s)(71.7MiB/2006msec) 00:30:19.569 slat (nsec): min=1917, max=160651, avg=2604.27, stdev=1919.64 00:30:19.569 clat (usec): min=3355, max=13321, avg=7743.32, stdev=571.27 00:30:19.569 lat (usec): min=3385, max=13324, avg=7745.92, stdev=571.16 00:30:19.569 clat percentiles (usec): 00:30:19.569 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:30:19.569 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:30:19.569 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:30:19.569 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[12649], 00:30:19.569 | 99.99th=[13173] 00:30:19.569 bw ( KiB/s): 
min=35640, max=36896, per=99.90%, avg=36572.00, stdev=621.48, samples=4 00:30:19.569 iops : min= 8910, max= 9224, avg=9143.00, stdev=155.37, samples=4 00:30:19.569 write: IOPS=9160, BW=35.8MiB/s (37.5MB/s)(71.8MiB/2006msec); 0 zone resets 00:30:19.569 slat (usec): min=2, max=140, avg= 2.73, stdev= 1.59 00:30:19.569 clat (usec): min=1434, max=12284, avg=6201.56, stdev=498.18 00:30:19.569 lat (usec): min=1443, max=12286, avg=6204.29, stdev=498.12 00:30:19.569 clat percentiles (usec): 00:30:19.569 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:30:19.569 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:30:19.569 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:30:19.569 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9634], 99.95th=[10552], 00:30:19.569 | 99.99th=[11863] 00:30:19.569 bw ( KiB/s): min=36240, max=36952, per=100.00%, avg=36646.00, stdev=362.65, samples=4 00:30:19.569 iops : min= 9060, max= 9238, avg=9161.50, stdev=90.66, samples=4 00:30:19.569 lat (msec) : 2=0.01%, 4=0.08%, 10=99.80%, 20=0.12% 00:30:19.569 cpu : usr=54.56%, sys=38.20%, ctx=51, majf=0, minf=6 00:30:19.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:19.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.569 issued rwts: total=18360,18376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.569 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.569 00:30:19.569 Run status group 0 (all jobs): 00:30:19.569 READ: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:30:19.569 WRITE: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=71.8MiB (75.3MB), run=2006-2006msec 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:19.569 03:29:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:19.570 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:19.570 fio-3.35 00:30:19.570 Starting 1 thread 00:30:19.570 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.098 00:30:22.098 test: (groupid=0, jobs=1): err= 0: pid=549014: Tue Jul 23 03:29:48 2024 00:30:22.098 read: IOPS=8369, BW=131MiB/s (137MB/s)(262MiB/2007msec) 00:30:22.098 slat (nsec): min=2982, max=92578, avg=3715.80, stdev=1723.70 00:30:22.098 clat (usec): min=2765, max=17398, avg=9131.24, stdev=2223.23 00:30:22.098 lat (usec): min=2768, max=17401, avg=9134.95, stdev=2223.30 00:30:22.098 clat percentiles (usec): 00:30:22.098 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7242], 00:30:22.098 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:30:22.098 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12125], 95.00th=[13042], 00:30:22.098 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16450], 99.95th=[17171], 00:30:22.098 | 99.99th=[17433] 00:30:22.098 bw ( KiB/s): min=62784, max=75168, per=50.71%, avg=67904.00, stdev=5495.75, samples=4 00:30:22.098 iops : min= 3924, max= 4698, avg=4244.00, stdev=343.48, samples=4 00:30:22.098 write: IOPS=4758, BW=74.4MiB/s (78.0MB/s)(138MiB/1861msec); 0 zone resets 00:30:22.098 slat (usec): min=30, max=189, avg=34.16, stdev= 5.61 00:30:22.098 clat (usec): min=3925, max=18022, avg=10952.55, stdev=1893.36 00:30:22.098 lat (usec): min=3961, max=18054, avg=10986.70, stdev=1893.53 00:30:22.098 clat percentiles (usec): 00:30:22.098 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9372], 00:30:22.098 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:30:22.098 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13566], 95.00th=[14746], 00:30:22.099 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17433], 99.95th=[17957], 00:30:22.099 | 99.99th=[17957] 00:30:22.099 bw ( KiB/s): min=65664, max=77088, per=92.65%, avg=70544.00, stdev=5351.85, samples=4 00:30:22.099 iops : min= 4104, max= 4818, avg=4409.00, stdev=334.49, samples=4 00:30:22.099 lat (msec) : 4=0.24%, 10=54.57%, 20=45.19% 00:30:22.099 cpu : usr=74.59%, sys=22.22%, ctx=20, majf=0, minf=2 
00:30:22.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:22.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.099 issued rwts: total=16798,8856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.099 00:30:22.099 Run status group 0 (all jobs): 00:30:22.099 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2007-2007msec 00:30:22.099 WRITE: bw=74.4MiB/s (78.0MB/s), 74.4MiB/s-74.4MiB/s (78.0MB/s-78.0MB/s), io=138MiB (145MB), run=1861-1861msec 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:22.099 03:29:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:25.410 Nvme0n1 00:30:25.410 03:29:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=205d071e-75a4-48f2-8d52-77ac74305c20 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 205d071e-75a4-48f2-8d52-77ac74305c20 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=205d071e-75a4-48f2-8d52-77ac74305c20 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:28.686 { 00:30:28.686 "uuid": "205d071e-75a4-48f2-8d52-77ac74305c20", 00:30:28.686 "name": "lvs_0", 00:30:28.686 "base_bdev": "Nvme0n1", 00:30:28.686 "total_data_clusters": 930, 00:30:28.686 "free_clusters": 930, 00:30:28.686 "block_size": 512, 
00:30:28.686 "cluster_size": 1073741824 00:30:28.686 } 00:30:28.686 ]' 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="205d071e-75a4-48f2-8d52-77ac74305c20") .free_clusters' 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="205d071e-75a4-48f2-8d52-77ac74305c20") .cluster_size' 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:28.686 952320 00:30:28.686 03:29:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:28.944 300bbb79-e00d-4013-8a9d-8fbe0dbc93ce 00:30:28.944 03:29:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:29.201 03:29:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:29.459 03:29:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:29.716 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # 
[[ -n '' ]] 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:29.717 03:29:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.974 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:29.974 fio-3.35 00:30:29.974 Starting 1 thread 00:30:29.974 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.502 00:30:32.502 test: (groupid=0, jobs=1): err= 0: pid=550291: Tue Jul 23 03:29:58 2024 00:30:32.502 read: IOPS=4811, BW=18.8MiB/s (19.7MB/s)(37.8MiB/2009msec) 00:30:32.502 slat (nsec): min=1887, max=162574, avg=2548.42, stdev=2506.83 00:30:32.502 clat (usec): min=1658, max=175660, avg=14560.32, stdev=12907.06 00:30:32.502 lat (usec): min=1661, max=175700, avg=14562.86, stdev=12907.44 00:30:32.502 clat percentiles (msec): 00:30:32.502 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:30:32.502 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:30:32.502 | 70.00th=[ 15], 80.00th=[ 15], 90.00th=[ 16], 95.00th=[ 17], 00:30:32.502 | 99.00th=[ 18], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:30:32.502 | 99.99th=[ 176] 00:30:32.502 bw ( KiB/s): min=13296, max=21616, per=99.70%, avg=19188.00, stdev=3948.69, samples=4 00:30:32.502 iops : min= 3324, max= 5404, avg=4797.00, stdev=987.17, samples=4 00:30:32.502 write: IOPS=4800, BW=18.8MiB/s (19.7MB/s)(37.7MiB/2009msec); 0 zone resets 00:30:32.502 slat (nsec): min=1983, max=137191, avg=2653.34, stdev=1795.99 00:30:32.502 clat (usec): min=470, max=172355, avg=11845.74, stdev=12089.27 00:30:32.502 lat (usec): min=472, max=172362, avg=11848.40, stdev=12089.66 00:30:32.502 clat percentiles (msec): 00:30:32.502 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:32.502 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:32.502 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:32.502 | 99.00th=[ 15], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 174], 00:30:32.502 | 99.99th=[ 174] 00:30:32.502 bw ( KiB/s): min=13928, max=20992, per=99.88%, avg=19178.00, stdev=3500.39, samples=4 00:30:32.502 iops : min= 3482, max= 5248, avg=4794.50, stdev=875.10, samples=4 00:30:32.502 lat (usec) : 500=0.01%, 750=0.01% 00:30:32.502 lat (msec) : 2=0.02%, 4=0.05%, 10=10.18%, 20=89.06%, 50=0.02% 00:30:32.502 lat (msec) : 250=0.66% 00:30:32.502 cpu : usr=55.40%, sys=40.32%, ctx=79, majf=0, minf=20 00:30:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.502 issued rwts: total=9666,9644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.502 00:30:32.502 Run status group 0 (all jobs): 00:30:32.502 READ: bw=18.8MiB/s (19.7MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=37.8MiB (39.6MB), run=2009-2009msec 00:30:32.502 WRITE: bw=18.8MiB/s (19.7MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=37.7MiB (39.5MB), run=2009-2009msec 00:30:32.502 03:29:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:32.502 03:29:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9180949c-58cf-43e2-8a91-ec2fe545533c 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9180949c-58cf-43e2-8a91-ec2fe545533c 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=9180949c-58cf-43e2-8a91-ec2fe545533c 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:33.875 { 00:30:33.875 "uuid": "205d071e-75a4-48f2-8d52-77ac74305c20", 00:30:33.875 "name": "lvs_0", 00:30:33.875 "base_bdev": "Nvme0n1", 00:30:33.875 "total_data_clusters": 930, 00:30:33.875 "free_clusters": 0, 00:30:33.875 "block_size": 512, 00:30:33.875 "cluster_size": 1073741824 00:30:33.875 }, 00:30:33.875 { 00:30:33.875 "uuid": "9180949c-58cf-43e2-8a91-ec2fe545533c", 00:30:33.875 "name": "lvs_n_0", 00:30:33.875 "base_bdev": "300bbb79-e00d-4013-8a9d-8fbe0dbc93ce", 00:30:33.875 "total_data_clusters": 237847, 00:30:33.875 "free_clusters": 237847, 00:30:33.875 "block_size": 512, 00:30:33.875 "cluster_size": 4194304 00:30:33.875 } 00:30:33.875 ]' 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9180949c-58cf-43e2-8a91-ec2fe545533c") .free_clusters' 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9180949c-58cf-43e2-8a91-ec2fe545533c") .cluster_size' 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:33.875 951388 00:30:33.875 03:30:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:34.808 cc0310d5-fe2d-4ed6-ae40-3f9873c30a9d 00:30:34.808 03:30:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:34.808 03:30:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:35.065 03:30:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:35.323 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.581 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:35.581 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:35.581 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:35.582 03:30:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:35.582 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:35.582 fio-3.35 00:30:35.582 Starting 1 thread 00:30:35.582 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.106 00:30:38.106 test: (groupid=0, jobs=1): err= 0: pid=551231: Tue Jul 23 03:30:04 2024 00:30:38.106 read: IOPS=5875, BW=22.9MiB/s (24.1MB/s)(46.1MiB/2009msec) 00:30:38.106 slat (nsec): min=1894, max=260064, avg=2509.36, stdev=3323.52 00:30:38.106 clat (usec): min=4467, max=19319, avg=12012.80, stdev=988.74 00:30:38.106 lat (usec): min=4513, max=19321, avg=12015.31, stdev=988.51 00:30:38.106 clat percentiles (usec): 00:30:38.106 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:30:38.106 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:38.106 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:38.106 | 99.00th=[14222], 99.50th=[14353], 99.90th=[17957], 99.95th=[19268], 00:30:38.106 | 99.99th=[19268] 00:30:38.106 bw ( KiB/s): min=21848, max=24168, per=99.89%, avg=23474.00, stdev=1102.98, samples=4 00:30:38.106 iops : min= 5462, max= 6042, avg=5868.50, stdev=275.75, samples=4 00:30:38.106 write: IOPS=5868, BW=22.9MiB/s (24.0MB/s)(46.1MiB/2009msec); 0 zone resets 00:30:38.106 slat (nsec): min=1973, max=191391, avg=2595.54, stdev=2109.51 00:30:38.106 clat (usec): min=3685, max=17592, avg=9544.34, stdev=888.61 00:30:38.106 lat (usec): min=3701, max=17595, avg=9546.94, stdev=888.53 00:30:38.106 clat percentiles (usec): 00:30:38.106 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:30:38.106 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:30:38.106 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:30:38.106 | 99.00th=[11469], 99.50th=[11731], 99.90th=[16712], 99.95th=[16909], 00:30:38.106 | 99.99th=[17433] 00:30:38.106 bw ( KiB/s): min=22912, max=23680, per=99.93%, avg=23456.00, stdev=363.92, samples=4 00:30:38.106 iops : min= 5728, max= 5920, avg=5864.00, stdev=90.98, samples=4 00:30:38.106 lat (msec) : 4=0.03%, 10=37.08%, 20=62.89% 00:30:38.106 cpu : usr=57.67%, sys=37.75%, ctx=93, majf=0, minf=20 00:30:38.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:38.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:38.106 issued rwts: total=11803,11789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:38.106 00:30:38.106 Run status group 0 (all jobs): 00:30:38.106 READ: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2009-2009msec 00:30:38.106 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.1MiB (48.3MB), run=2009-2009msec 00:30:38.106 03:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:38.363 03:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:38.363 03:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:42.543 03:30:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l 
lvs_n_0 00:30:42.543 03:30:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:45.823 03:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:45.824 03:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:47.753 03:30:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:47.754 rmmod nvme_tcp 00:30:47.754 rmmod nvme_fabrics 00:30:47.754 rmmod nvme_keyring 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 548322 ']' 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 548322 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 548322 ']' 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 548322 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 548322 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 548322' 00:30:47.754 killing process with pid 548322 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 548322 00:30:47.754 03:30:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 548322 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.754 03:30:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.286 03:30:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.286 00:30:50.286 real 0m37.213s 00:30:50.286 user 2m22.251s 00:30:50.286 sys 0m7.214s 00:30:50.286 03:30:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.286 03:30:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.286 ************************************ 00:30:50.286 END TEST nvmf_fio_host 00:30:50.286 ************************************ 00:30:50.286 03:30:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:50.286 03:30:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:50.286 03:30:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:50.286 03:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.286 ************************************ 00:30:50.286 START TEST nvmf_failover 00:30:50.286 ************************************ 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:50.286 * Looking for test storage... 00:30:50.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.286 03:30:16 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.286 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:50.287 03:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.662 03:30:18 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:51.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:51.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.662 03:30:18 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:51.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:51.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:51.662 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.921 
03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:51.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:30:51.921 00:30:51.921 --- 10.0.0.2 ping statistics --- 00:30:51.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.921 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:51.921 00:30:51.921 --- 10.0.0.1 ping statistics --- 00:30:51.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.921 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=555018 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 555018 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 555018 ']' 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:51.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:51.921 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:51.921 [2024-07-23 03:30:18.435530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:51.921 [2024-07-23 03:30:18.435609] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.921 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.179 [2024-07-23 03:30:18.505490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:52.179 [2024-07-23 03:30:18.600331] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.179 [2024-07-23 03:30:18.600411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.179 [2024-07-23 03:30:18.600429] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.179 [2024-07-23 03:30:18.600443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.179 [2024-07-23 03:30:18.600455] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.179 [2024-07-23 03:30:18.600550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.179 [2024-07-23 03:30:18.600732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.179 [2024-07-23 03:30:18.600736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.179 03:30:18 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:52.437 [2024-07-23 03:30:18.986724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.437 03:30:19 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:53.001 Malloc0 00:30:53.001 03:30:19 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.001 03:30:19 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.567 03:30:19 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.567 [2024-07-23 03:30:20.114501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.567 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:53.824 [2024-07-23 03:30:20.387197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:54.081 [2024-07-23 03:30:20.635999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=555306 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 555306 /var/tmp/bdevperf.sock 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 555306 ']' 00:30:54.081 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:54.339 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:54.339 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:54.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
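For readability, here is a condensed sketch of the RPC sequence that failover.sh drives in the trace above, before bdevperf starts issuing I/O. This is a summary of commands already shown in the log, not additional test steps; rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the 4421/4422 listeners use the same add_listener form as 4420.

  # target side: TCP transport, 64 MiB / 512 B-block malloc bdev, subsystem cnode1
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three listeners on the same address so the host has paths to fail over between
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # then 4421, 4422

  # initiator side: bdevperf in RPC-wait mode, then attach paths on 4420 and 4421
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # during the run the test removes the 4420 listener (failover.sh@43) to force I/O
  # onto the surviving path, then later re-adds/removes 4421 and 4422 in turn
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
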
00:30:54.339 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:54.339 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.597 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:54.597 03:30:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:54.597 03:30:20 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.855 NVMe0n1 00:30:54.855 03:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.420 00:30:55.420 03:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=555443 00:30:55.420 03:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:55.420 03:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:56.354 03:30:22 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.612 03:30:23 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:59.894 03:30:26 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.152 00:31:00.152 03:30:26 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:00.410 03:30:26 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:03.692 03:30:29 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.692 [2024-07-23 03:30:30.078856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.692 03:30:30 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:04.625 03:30:31 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:04.884 [2024-07-23 03:30:31.335776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 
03:30:31.335888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.335992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same 
with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336472] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 [2024-07-23 03:30:31.336536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30750 is same with the state(5) to be set 00:31:04.884 03:30:31 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 555443 00:31:11.515 0 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 555306 ']' 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 555306' 00:31:11.515 killing process with pid 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 555306 00:31:11.515 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:11.515 [2024-07-23 03:30:20.699414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:11.515 [2024-07-23 03:30:20.699509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555306 ] 00:31:11.515 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.515 [2024-07-23 03:30:20.759440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.515 [2024-07-23 03:30:20.844503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.515 Running I/O for 15 seconds... 
00:31:11.515 [2024-07-23 03:30:23.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.515 [2024-07-23 03:30:23.149820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.149851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.149883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.149930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.149961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.149977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.149990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.150015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.150030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.150045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.515 [2024-07-23 03:30:23.150058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.515 [2024-07-23 03:30:23.150073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86104 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 
[2024-07-23 03:30:23.150915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.150974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.150988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.516 [2024-07-23 03:30:23.151249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.516 [2024-07-23 03:30:23.151264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.151771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.151982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.151996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 
[2024-07-23 03:30:23.152154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.517 [2024-07-23 03:30:23.152168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.517 [2024-07-23 03:30:23.152213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.517 [2024-07-23 03:30:23.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:31:11.517 [2024-07-23 03:30:23.152274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.517 [2024-07-23 03:30:23.152368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.517 [2024-07-23 03:30:23.152400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.517 [2024-07-23 03:30:23.152433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.517 [2024-07-23 03:30:23.152462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eeb0 is same with the state(5) to be set 00:31:11.517 [2024-07-23 03:30:23.152673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.517 [2024-07-23 03:30:23.152693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.517 [2024-07-23 03:30:23.152706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:31:11.517 [2024-07-23 03:30:23.152720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.517 [2024-07-23 03:30:23.152737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.517 [2024-07-23 03:30:23.152750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:11.517 [2024-07-23 03:30:23.152761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:31:11.517 [2024-07-23 03:30:23.152775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.152789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.152800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.152812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.152825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.152839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.152851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.152862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.152876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.152915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.152927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.152938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85608 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.152952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.152966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.152978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.152989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85616 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85624 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85632 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85640 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85648 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85656 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85664 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85672 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:85680 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85688 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85696 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85704 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85712 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85720 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 
[2024-07-23 03:30:23.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85728 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85736 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85744 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.153957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85752 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.153984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.518 [2024-07-23 03:30:23.153995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.518 [2024-07-23 03:30:23.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85760 len:8 PRP1 0x0 PRP2 0x0 00:31:11.518 [2024-07-23 03:30:23.154019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.518 [2024-07-23 03:30:23.154032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85768 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85776 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85784 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85792 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85800 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85808 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85816 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85824 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85832 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85840 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85848 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85856 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85864 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:11.519 [2024-07-23 03:30:23.154710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85872 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85880 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85888 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85896 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85904 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.154963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.154977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.154988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.154999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85912 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.155013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.155026] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.155037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.155049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85920 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.155062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.155082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.155094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.519 [2024-07-23 03:30:23.155106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85928 len:8 PRP1 0x0 PRP2 0x0 00:31:11.519 [2024-07-23 03:30:23.155119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.519 [2024-07-23 03:30:23.155132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.519 [2024-07-23 03:30:23.155144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85936 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85944 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85952 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85960 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:31:11.520 [2024-07-23 03:30:23.155343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85384 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85392 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85400 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85408 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85416 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85424 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155673] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85432 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85968 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85976 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85984 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85992 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.155961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86000 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.155974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.155987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.155999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.156010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86008 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.156036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.156047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.156059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86016 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.156072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.156085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.156096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.156108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86024 len:8 PRP1 0x0 PRP2 0x0 00:31:11.520 [2024-07-23 03:30:23.156121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.520 [2024-07-23 03:30:23.156135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.520 [2024-07-23 03:30:23.156146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.520 [2024-07-23 03:30:23.156157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86032 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86040 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86048 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 
[2024-07-23 03:30:23.156308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86056 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86064 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86072 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86080 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86088 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86096 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86104 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86112 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86120 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86128 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.156961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.156974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.156988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.156998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86168 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86176 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86184 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86192 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 
00:31:11.521 [2024-07-23 03:30:23.157283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86208 len:8 PRP1 0x0 PRP2 0x0 00:31:11.521 [2024-07-23 03:30:23.157332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.521 [2024-07-23 03:30:23.157346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.521 [2024-07-23 03:30:23.157357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.521 [2024-07-23 03:30:23.157369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86216 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.157382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.157395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.157406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.157418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.157431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.157444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.157455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.157480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.157494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.157505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.157519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.157533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.157547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.157558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.157570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.157584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.157619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.157649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.157661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85440 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85448 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85456 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85464 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85472 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85480 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85488 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:11.522 [2024-07-23 03:30:23.163880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85496 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.163931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.163948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.163960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.163993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.164007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.164018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.164030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.164044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.164058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.164069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.164080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.164093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.164107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.164119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.164130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.164143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.164157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.164168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.164180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 00:31:11.522 [2024-07-23 03:30:23.164193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.522 [2024-07-23 03:30:23.164207] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.522 [2024-07-23 03:30:23.164218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.522 [2024-07-23 03:30:23.164229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85504 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85512 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85520 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85528 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:31:11.523 [2024-07-23 03:30:23.164518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85536 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85544 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85552 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85560 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85568 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85576 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164858] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85584 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85592 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.164961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.164973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.164985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85600 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.164998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.165011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.165022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.165034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.165047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.165060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.523 [2024-07-23 03:30:23.165071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.523 [2024-07-23 03:30:23.165083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:31:11.523 [2024-07-23 03:30:23.165096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:23.165155] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x238db50 was disconnected and freed. reset controller. 00:31:11.523 [2024-07-23 03:30:23.165173] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:11.523 [2024-07-23 03:30:23.165189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
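(At this point the log shows the queued WRITE/READ commands being completed manually with ABORTED - SQ DELETION (00/08), TCP qpair 0x238db50 being disconnected and freed, and bdev_nvme starting a failover from 10.0.0.2:4420 to 10.0.0.2:4421; the entries that follow show the controller being reset and the reset completing. As a rough, hypothetical illustration only, not code from this test run, the application-facing equivalent of that "resetting controller" step is the public SPDK call sketched below; the helper name is made up, and the ctrlr handle is assumed to have been obtained earlier, e.g. with spdk_nvme_connect() against traddr 10.0.0.2 / trsvcid 4420.)

/* Hypothetical sketch, not taken from the autotest sources: it only
 * illustrates the spdk_nvme_ctrlr_reset() call that corresponds to the
 * "resetting controller" / "Resetting controller successful" messages. */
#include <stdio.h>
#include "spdk/nvme.h"

static void reset_ctrlr_after_qpair_loss(struct spdk_nvme_ctrlr *ctrlr)
{
        int rc;

        /* Disconnects and re-enables the controller; I/O that was aborted
         * with SQ DELETION is completed back to the caller, which may
         * choose to resubmit it once the reset has finished. */
        rc = spdk_nvme_ctrlr_reset(ctrlr);
        if (rc != 0) {
                fprintf(stderr, "spdk_nvme_ctrlr_reset() failed: %d\n", rc);
                return;
        }
        printf("controller reset completed\n");
}

(The raw log continues below.)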
00:31:11.523 [2024-07-23 03:30:23.165249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236eeb0 (9): Bad file descriptor 00:31:11.523 [2024-07-23 03:30:23.168517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.523 [2024-07-23 03:30:23.327021] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:11.523 [2024-07-23 03:30:26.830930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.523 [2024-07-23 03:30:26.831006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.523 [2024-07-23 03:30:26.831052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.523 [2024-07-23 03:30:26.831080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.523 [2024-07-23 03:30:26.831109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eeb0 is same with the state(5) to be set 00:31:11.523 [2024-07-23 03:30:26.831360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.523 [2024-07-23 03:30:26.831383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.523 [2024-07-23 03:30:26.831426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.523 [2024-07-23 03:30:26.831459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.523 [2024-07-23 03:30:26.831506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.523 [2024-07-23 03:30:26.831524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120040 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.523 [2024-07-23 03:30:26.831539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:11.524 [2024-07-23 03:30:26.831904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.831965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.831997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.524 [2024-07-23 03:30:26.832072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.524 [2024-07-23 03:30:26.832102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.524 [2024-07-23 03:30:26.832135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.524 [2024-07-23 03:30:26.832166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 
03:30:26.832226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.524 [2024-07-23 03:30:26.832611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.524 [2024-07-23 03:30:26.832635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.832979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.832995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.525 [2024-07-23 03:30:26.833735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 
[2024-07-23 03:30:26.833845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.525 [2024-07-23 03:30:26.833876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.525 [2024-07-23 03:30:26.833893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.833910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.833941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.833958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.833972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.833988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.834977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.526 [2024-07-23 03:30:26.835182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.526 [2024-07-23 03:30:26.835196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:26.835511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25385b0 is same with the state(5) to be set 00:31:11.527 [2024-07-23 03:30:26.835543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.527 [2024-07-23 03:30:26.835554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.527 [2024-07-23 03:30:26.835565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120936 len:8 PRP1 0x0 PRP2 0x0 00:31:11.527 [2024-07-23 03:30:26.835579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:26.835669] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25385b0 was disconnected and freed. reset controller. 00:31:11.527 [2024-07-23 03:30:26.835691] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:11.527 [2024-07-23 03:30:26.835708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.527 [2024-07-23 03:30:26.839040] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.527 [2024-07-23 03:30:26.839081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236eeb0 (9): Bad file descriptor 00:31:11.527 [2024-07-23 03:30:26.955757] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:11.527 [2024-07-23 03:30:31.338412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.527 [2024-07-23 03:30:31.338457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.338977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.338993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.527 [2024-07-23 03:30:31.339212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.527 [2024-07-23 03:30:31.339228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.528 [2024-07-23 03:30:31.339695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:11.528 [2024-07-23 03:30:31.339757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.339980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.339994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340383] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.528 [2024-07-23 03:30:31.340457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.528 [2024-07-23 03:30:31.340472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.529 [2024-07-23 03:30:31.340813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.340977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.340991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 
[2024-07-23 03:30:31.341070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.529 [2024-07-23 03:30:31.341507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.529 [2024-07-23 03:30:31.341523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.530 [2024-07-23 03:30:31.341537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.530 [2024-07-23 03:30:31.341566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.530 [2024-07-23 03:30:31.341612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.530 [2024-07-23 03:30:31.341655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.341704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.341719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:11.530 [2024-07-23 03:30:31.341766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.530 [2024-07-23 03:30:31.341787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.530 [2024-07-23 03:30:31.341817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.530 [2024-07-23 03:30:31.341850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.530 [2024-07-23 03:30:31.341879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.341892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eeb0 is same with the state(5) to be set 00:31:11.530 [2024-07-23 03:30:31.342139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:31:11.530 [2024-07-23 03:30:31.342328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.342955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.342970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.342981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.342993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.343007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.343021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.343032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.343044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.343057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.530 [2024-07-23 03:30:31.343071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.530 [2024-07-23 03:30:31.343083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.530 [2024-07-23 03:30:31.343096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:31:11.530 [2024-07-23 03:30:31.343109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 
0x0 00:31:11.531 [2024-07-23 03:30:31.343313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.343950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.343963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.343974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.343988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.531 [2024-07-23 03:30:31.344316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.531 [2024-07-23 03:30:31.344327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:31:11.531 [2024-07-23 03:30:31.344340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.531 [2024-07-23 03:30:31.344358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 
03:30:31.344638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.344962] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.344973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.344984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.344997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:31:11.532 [2024-07-23 03:30:31.345279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.532 [2024-07-23 03:30:31.345539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:31:11.532 [2024-07-23 03:30:31.345552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.532 [2024-07-23 03:30:31.345571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.532 [2024-07-23 03:30:31.345582] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.345960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.345974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.345985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.345997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 
[2024-07-23 03:30:31.346272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.533 [2024-07-23 03:30:31.346628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.533 [2024-07-23 03:30:31.346640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:31:11.533 [2024-07-23 03:30:31.346658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.533 [2024-07-23 03:30:31.346672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.346952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.346965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.346979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.346991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.347003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.347017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.347031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.347042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.347057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.347071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.347085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.347097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.347109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.347142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.347153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.347165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.352816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.352848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.352862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.352875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 
00:31:11.534 [2024-07-23 03:30:31.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.352903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.352924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.352936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.352949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.352962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.352974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.352985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.352998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.534 [2024-07-23 03:30:31.353491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:31:11.534 [2024-07-23 03:30:31.353505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.534 [2024-07-23 03:30:31.353524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.534 [2024-07-23 03:30:31.353536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79704 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:11.535 [2024-07-23 03:30:31.353858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.353949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.353966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.353980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.353991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354180] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:11.535 [2024-07-23 03:30:31.354594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:11.535 [2024-07-23 03:30:31.354605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:31:11.535 [2024-07-23 03:30:31.354640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.535 [2024-07-23 03:30:31.354702] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23926d0 was disconnected and freed. reset controller. 00:31:11.535 [2024-07-23 03:30:31.354721] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:11.535 [2024-07-23 03:30:31.354736] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.535 [2024-07-23 03:30:31.354790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236eeb0 (9): Bad file descriptor 00:31:11.535 [2024-07-23 03:30:31.358121] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.535 [2024-07-23 03:30:31.428673] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:11.535
00:31:11.535 Latency(us)
00:31:11.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:11.535 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:11.535 Verification LBA range: start 0x0 length 0x4000
00:31:11.535 NVMe0n1 : 15.01 8698.78 33.98 898.36 0.00 13310.45 825.27 23592.96
00:31:11.535 ===================================================================================================================
00:31:11.536 Total : 8698.78 33.98 898.36 0.00 13310.45 825.27 23592.96
00:31:11.536 Received shutdown signal, test time was about 15.000000 seconds
00:31:11.536
00:31:11.536 Latency(us)
00:31:11.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:11.536 ===================================================================================================================
00:31:11.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=557168
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 557168 /var/tmp/bdevperf.sock
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 557168 ']'
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:11.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
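The trace that follows drives this second bdevperf instance entirely over its RPC socket: with -z the application waits instead of starting I/O on its own, the NVMe0 paths are configured through rpc.py against /var/tmp/bdevperf.sock, and the one-second verify pass is then kicked off with bdevperf.py perform_tests. Reduced to a rough sketch (a paraphrase of the traced commands rather than a verbatim excerpt of host/failover.sh; $SPDK stands for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and waitforlisten is the autotest_common.sh helper seen above):
  # start bdevperf in RPC-driven mode; -t 1 -w verify -q 128 -o 4096 as in the trace
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock
  # ... attach the NVMe0 failover paths over the same socket (see the next sketch) ...
  # trigger the timed run and wait for its completion status
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait $run_test_pid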
00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:11.536 [2024-07-23 03:30:37.776241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:11.536 03:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:11.536 [2024-07-23 03:30:38.020967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:11.536 03:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:12.101 NVMe0n1 00:31:12.101 03:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:12.358 00:31:12.358 03:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:12.923 00:31:12.923 03:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:12.923 03:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:13.181 03:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.439 03:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:16.719 03:30:42 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:16.719 03:30:42 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:16.719 03:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=557909 00:31:16.719 03:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:16.719 03:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 557909 00:31:17.651 0 00:31:17.651 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:17.651 [2024-07-23 03:30:37.301004] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:31:17.651 [2024-07-23 03:30:37.301108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557168 ] 00:31:17.651 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.651 [2024-07-23 03:30:37.361717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.651 [2024-07-23 03:30:37.446886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.651 [2024-07-23 03:30:39.814790] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:17.651 [2024-07-23 03:30:39.814866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.651 [2024-07-23 03:30:39.814889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.651 [2024-07-23 03:30:39.814907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.651 [2024-07-23 03:30:39.814921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.651 [2024-07-23 03:30:39.814935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.651 [2024-07-23 03:30:39.814950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.651 [2024-07-23 03:30:39.814964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.651 [2024-07-23 03:30:39.814978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.651 [2024-07-23 03:30:39.814992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.651 [2024-07-23 03:30:39.815036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.651 [2024-07-23 03:30:39.815068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a9eb0 (9): Bad file descriptor 00:31:17.651 [2024-07-23 03:30:39.835411] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:17.651 Running I/O for 1 seconds... 
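In essence, the path management traced above boils down to the sequence below: the target exposes two extra listeners (4421, 4422), the host attaches NVMe0 through all three portals so bdev_nvme holds them as alternate paths, and the active path on 4420 is detached so the verify run has to come up on another listener; the try.txt excerpt above shows the resulting failover from 10.0.0.2:4420 to 10.0.0.2:4421. This is a condensed sketch, not the literal script; $rpc abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py as used in the trace.
  # target side: two additional NVMe/TCP listeners for the same subsystem
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host side, against the bdevperf RPC socket: one bdev, three paths
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0    # sanity check that the bdev is still there
  # drop the active path; I/O must fail over to 4421 or 4422
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3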
00:31:17.651
00:31:17.651 Latency(us)
00:31:17.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:17.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:17.651 Verification LBA range: start 0x0 length 0x4000
00:31:17.651 NVMe0n1 : 1.01 8891.29 34.73 0.00 0.00 14337.60 1868.99 11699.39
00:31:17.651 ===================================================================================================================
00:31:17.651 Total : 8891.29 34.73 0.00 0.00 14337.60 1868.99 11699.39
00:31:17.651 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:17.651 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:17.909 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:18.166 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:18.166 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:18.424 03:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:18.682 03:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 557168
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 557168 ']'
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 557168
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:21.961 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 557168
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 557168'
00:31:22.219 killing process with pid 557168
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 557168
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 557168
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:31:22.219 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:22.477 03:30:48
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.477 03:30:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.477 rmmod nvme_tcp 00:31:22.477 rmmod nvme_fabrics 00:31:22.477 rmmod nvme_keyring 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 555018 ']' 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 555018 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 555018 ']' 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 555018 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:22.477 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 555018 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 555018' 00:31:22.736 killing process with pid 555018 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 555018 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 555018 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.736 03:30:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.271 03:30:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.271 00:31:25.271 real 0m34.998s 00:31:25.271 user 2m3.377s 00:31:25.271 sys 0m6.023s 00:31:25.271 03:30:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.271 03:30:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:25.271 
************************************ 00:31:25.271 END TEST nvmf_failover 00:31:25.271 ************************************ 00:31:25.271 03:30:51 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:25.271 03:30:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:25.271 03:30:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:25.271 03:30:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.271 ************************************ 00:31:25.271 START TEST nvmf_host_discovery 00:31:25.271 ************************************ 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:25.271 * Looking for test storage... 00:31:25.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.271 03:30:51 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.271 03:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.175 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.175 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:27.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:27.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:27.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:27.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:27.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:31:27.176 00:31:27.176 --- 10.0.0.2 ping statistics --- 00:31:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.176 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:31:27.176 00:31:27.176 --- 10.0.0.1 ping statistics --- 00:31:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.176 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=560555 00:31:27.176 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 560555 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 560555 ']' 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:27.177 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.177 [2024-07-23 03:30:53.619601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:27.177 [2024-07-23 03:30:53.619700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.177 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.177 [2024-07-23 03:30:53.687073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.437 [2024-07-23 03:30:53.779095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.437 [2024-07-23 03:30:53.779158] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.437 [2024-07-23 03:30:53.779174] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.437 [2024-07-23 03:30:53.779195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.437 [2024-07-23 03:30:53.779207] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
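The trace above is the nvmf_tcp_init path of nvmf/common.sh: the two ice-driven E810 ports (0x8086:0x159b) are detected, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is then started inside the namespace on core mask 0x2. Condensed, the same setup looks roughly like this (interface names as detected on this node; the job uses the full workspace path for the nvmf_tgt binary):

  $ ip netns add cvl_0_0_ns_spdk
  $ ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  $ ip addr add 10.0.0.1/24 dev cvl_0_1
  $ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  $ ip link set cvl_0_1 up
  $ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  $ ip netns exec cvl_0_0_ns_spdk ip link set lo up
  $ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  $ ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # RPC socket: /var/tmp/spdk.sock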
00:31:27.437 [2024-07-23 03:30:53.779244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 [2024-07-23 03:30:53.930930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 [2024-07-23 03:30:53.939137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 null0 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 null1 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=560577 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 560577 /tmp/host.sock 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 560577 ']' 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:27.437 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:27.437 03:30:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.731 [2024-07-23 03:30:54.012478] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:27.731 [2024-07-23 03:30:54.012553] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560577 ] 00:31:27.731 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.731 [2024-07-23 03:30:54.075679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.731 [2024-07-23 03:30:54.166440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.731 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
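Two SPDK apps are now up: the target (pid 560555, RPC on the default /var/tmp/spdk.sock, running inside the namespace) and the host app (pid 560577, RPC on /tmp/host.sock). The test drives both through rpc_cmd, the harness's JSON-RPC helper; issued by hand with scripts/rpc.py, the setup done so far and the two polls the trace keeps repeating (get_subsystem_names and get_bdev_list) would look roughly like:

  # target side: TCP transport, discovery listener on 8009, two null bdevs
  $ scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $ scripts/rpc.py bdev_null_create null0 1000 512
  $ scripts/rpc.py bdev_null_create null1 1000 512
  # host side: start the discovery service, then poll what it has attached
  $ scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  $ scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # get_subsystem_names
  $ scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # get_bdev_list

Both helpers return empty strings at this point, since only the discovery subsystem exists; the test expects 'nvme0' and 'nvme0n1 nvme0n2' only after cnode0, its listener on 4420, and its namespaces are created in the entries that follow.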
00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.989 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:27.990 03:30:54 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.990 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 [2024-07-23 03:30:54.564776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:28.248 03:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:28.815 [2024-07-23 03:30:55.349791] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:28.815 [2024-07-23 03:30:55.349828] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:28.815 [2024-07-23 03:30:55.349851] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:29.072 [2024-07-23 03:30:55.436115] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:29.072 [2024-07-23 03:30:55.620122] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:29.072 [2024-07-23 03:30:55.620149] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.331 03:30:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.331 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:29.589 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 [2024-07-23 03:30:55.993020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:29.590 [2024-07-23 03:30:55.993682] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:29.590 [2024-07-23 03:30:55.993715] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 03:30:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.590 [2024-07-23 03:30:56.121506] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:29.590 03:30:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:30.155 [2024-07-23 03:30:56.424998] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.155 [2024-07-23 03:30:56.425025] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.155 [2024-07-23 03:30:56.425035] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.737 [2024-07-23 03:30:57.217470] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:30.737 [2024-07-23 03:30:57.217504] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.737 [2024-07-23 03:30:57.221425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.737 [2024-07-23 03:30:57.221458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.737 [2024-07-23 03:30:57.221478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.737 [2024-07-23 03:30:57.221493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.737 [2024-07-23 03:30:57.221508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.737 [2024-07-23 03:30:57.221539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.737 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.737 [2024-07-23 03:30:57.221554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.737 [2024-07-23 03:30:57.221571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.738 [2024-07-23 03:30:57.221584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # get_subsystem_names 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.738 [2024-07-23 03:30:57.231429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.738 [2024-07-23 03:30:57.241475] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.241733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.241763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.241781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.241810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.241846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.241866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.241881] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.738 [2024-07-23 03:30:57.241907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.738 [2024-07-23 03:30:57.251553] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.251771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.251799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.251816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.251838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.251859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.251874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.251888] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
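The bursts of 'connect() failed, errno = 111' followed by 'Resetting controller failed.' here (and in the next few entries) are the expected fallout of the listener removal above: the host still holds a path to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, the target has just stopped listening there, so every reconnect attempt is refused until the next discovery log page drops the 4420 path. errno 111 is ECONNREFUSED; the mapping can be confirmed with a one-liner (not part of the test):

  $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  ECONNREFUSED - Connection refused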
00:31:30.738 [2024-07-23 03:30:57.251925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.738 [2024-07-23 03:30:57.261634] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.261862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.261890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.261906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.261929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.262008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.262030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.262044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.738 [2024-07-23 03:30:57.262064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.738 [2024-07-23 03:30:57.271730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.271952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.271982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.271999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.272022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.272057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.272076] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.272089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.738 [2024-07-23 03:30:57.272110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.738 [2024-07-23 03:30:57.281805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.282039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.282068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.282084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.282107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.282141] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.282159] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.282174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.738 [2024-07-23 03:30:57.282193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.738 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.738 [2024-07-23 03:30:57.291879] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.738 [2024-07-23 03:30:57.292094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.738 [2024-07-23 03:30:57.292120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.738 [2024-07-23 03:30:57.292150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.738 [2024-07-23 03:30:57.292173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.738 [2024-07-23 03:30:57.292207] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.738 [2024-07-23 03:30:57.292225] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.738 [2024-07-23 03:30:57.292254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
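A few entries below, the discovery poller reports the 4420 path as not found and keeps only 4421, and the test's get_subsystem_paths check confirms this by listing the trsvcid of every path on controller nvme0. Issued by hand against the host app's socket (the trace itself goes through rpc_cmd), the check is roughly as follows, with the output expected at this point in the log:

  $ scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  4421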
00:31:30.738 [2024-07-23 03:30:57.292280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.996 [2024-07-23 03:30:57.301975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:30.997 [2024-07-23 03:30:57.302221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.997 [2024-07-23 03:30:57.302248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285450 with addr=10.0.0.2, port=4420 00:31:30.997 [2024-07-23 03:30:57.302266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2285450 is same with the state(5) to be set 00:31:30.997 [2024-07-23 03:30:57.302289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2285450 (9): Bad file descriptor 00:31:30.997 [2024-07-23 03:30:57.302322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.997 [2024-07-23 03:30:57.302340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.997 [2024-07-23 03:30:57.302354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.997 [2024-07-23 03:30:57.302389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:30.997 [2024-07-23 03:30:57.304323] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:30.997 [2024-07-23 03:30:57.304365] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.997 
03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:30.997 03:30:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.997 03:30:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 [2024-07-23 03:30:58.578617] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:32.382 [2024-07-23 03:30:58.578644] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:32.382 [2024-07-23 03:30:58.578669] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.382 [2024-07-23 03:30:58.664938] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:32.382 [2024-07-23 03:30:58.731189] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:32.382 [2024-07-23 03:30:58.731229] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.382 
03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 request: 00:31:32.382 { 00:31:32.382 "name": "nvme", 00:31:32.382 "trtype": "tcp", 00:31:32.382 "traddr": "10.0.0.2", 00:31:32.382 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:32.382 "adrfam": "ipv4", 00:31:32.382 "trsvcid": "8009", 00:31:32.382 "wait_for_attach": true, 00:31:32.382 "method": "bdev_nvme_start_discovery", 00:31:32.382 "req_id": 1 00:31:32.382 } 00:31:32.382 Got JSON-RPC error response 00:31:32.382 response: 00:31:32.382 { 00:31:32.382 "code": -17, 00:31:32.382 "message": "File exists" 00:31:32.382 } 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.382 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 request: 00:31:32.382 { 00:31:32.382 "name": "nvme_second", 00:31:32.382 "trtype": "tcp", 00:31:32.382 "traddr": "10.0.0.2", 00:31:32.383 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:32.383 "adrfam": "ipv4", 00:31:32.383 "trsvcid": "8009", 00:31:32.383 "wait_for_attach": true, 00:31:32.383 "method": "bdev_nvme_start_discovery", 00:31:32.383 "req_id": 1 00:31:32.383 } 00:31:32.383 Got JSON-RPC error response 00:31:32.383 response: 00:31:32.383 { 00:31:32.383 "code": -17, 00:31:32.383 "message": "File exists" 00:31:32.383 } 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.383 03:30:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.758 [2024-07-23 03:30:59.942712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.758 [2024-07-23 03:30:59.942777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229e8a0 with addr=10.0.0.2, port=8010 00:31:33.758 [2024-07-23 03:30:59.942820] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:33.758 [2024-07-23 03:30:59.942836] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:33.758 [2024-07-23 03:30:59.942851] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:34.691 [2024-07-23 03:31:00.945148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.691 [2024-07-23 03:31:00.945208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b4cd0 with addr=10.0.0.2, port=8010 00:31:34.691 [2024-07-23 03:31:00.945244] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:34.691 [2024-07-23 03:31:00.945270] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:34.691 [2024-07-23 03:31:00.945285] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:35.625 [2024-07-23 03:31:01.947268] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:35.625 request: 00:31:35.625 { 00:31:35.625 "name": "nvme_second", 00:31:35.625 "trtype": "tcp", 00:31:35.625 "traddr": "10.0.0.2", 00:31:35.625 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:35.625 "adrfam": "ipv4", 00:31:35.625 "trsvcid": "8010", 00:31:35.625 "attach_timeout_ms": 3000, 00:31:35.625 "method": "bdev_nvme_start_discovery", 00:31:35.625 "req_id": 1 00:31:35.625 } 00:31:35.625 Got JSON-RPC error response 00:31:35.625 response: 00:31:35.625 { 
00:31:35.625 "code": -110, 00:31:35.625 "message": "Connection timed out" 00:31:35.625 } 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 560577 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:35.625 03:31:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:35.625 rmmod nvme_tcp 00:31:35.625 rmmod nvme_fabrics 00:31:35.625 rmmod nvme_keyring 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 560555 ']' 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 560555 00:31:35.625 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 560555 ']' 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 560555 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 560555 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # 
process_name=reactor_1 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 560555' 00:31:35.626 killing process with pid 560555 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 560555 00:31:35.626 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 560555 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.884 03:31:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.783 03:31:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:37.783 00:31:37.783 real 0m12.966s 00:31:37.783 user 0m18.714s 00:31:37.783 sys 0m2.768s 00:31:37.783 03:31:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.783 03:31:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.783 ************************************ 00:31:37.783 END TEST nvmf_host_discovery 00:31:37.784 ************************************ 00:31:38.043 03:31:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:38.043 03:31:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:38.043 03:31:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:38.043 03:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.043 ************************************ 00:31:38.043 START TEST nvmf_host_multipath_status 00:31:38.043 ************************************ 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:38.043 * Looking for test storage... 
00:31:38.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:38.043 03:31:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:38.043 03:31:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:39.946 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:39.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:39.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:39.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.946 03:31:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.946 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:40.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:31:40.208 00:31:40.208 --- 10.0.0.2 ping statistics --- 00:31:40.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.208 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:31:40.208 00:31:40.208 --- 10.0.0.1 ping statistics --- 00:31:40.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.208 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=563608 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 563608 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 563608 ']' 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:40.208 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:40.208 [2024-07-23 03:31:06.600234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:31:40.208 [2024-07-23 03:31:06.600305] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.208 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.208 [2024-07-23 03:31:06.663241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:40.208 [2024-07-23 03:31:06.746721] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.208 [2024-07-23 03:31:06.746778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.208 [2024-07-23 03:31:06.746792] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.208 [2024-07-23 03:31:06.746802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.208 [2024-07-23 03:31:06.746812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.208 [2024-07-23 03:31:06.746961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.208 [2024-07-23 03:31:06.746967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=563608 00:31:40.466 03:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:40.724 [2024-07-23 03:31:07.161295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.724 03:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:40.982 Malloc0 00:31:40.982 03:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:41.240 03:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.498 03:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.755 [2024-07-23 03:31:08.176610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.755 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:42.014 [2024-07-23 03:31:08.433424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=563885 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 563885 /var/tmp/bdevperf.sock 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 563885 ']' 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:42.014 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.272 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:42.272 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:42.272 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:42.529 03:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:42.787 Nvme0n1 00:31:42.787 03:31:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:43.352 Nvme0n1 00:31:43.352 03:31:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:43.352 03:31:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:45.914 03:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:45.914 03:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:45.914 03:31:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:45.914 03:31:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:46.848 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:46.848 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:46.848 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.848 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.104 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.104 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:47.104 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.104 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.361 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.361 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.361 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.361 03:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.619 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.619 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.619 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.619 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.876 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.876 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:47.876 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.876 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:48.132 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.132 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:48.132 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.132 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.389 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.389 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:48.389 03:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.646 03:31:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.904 03:31:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:50.276 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:50.276 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:50.276 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.276 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:50.277 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.277 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:50.277 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.277 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.535 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.535 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.535 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.535 03:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.794 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:50.794 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.794 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.794 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:51.052 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.052 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:51.052 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.052 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:51.310 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.310 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:51.310 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.310 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.568 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.568 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:51.568 03:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.826 03:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:51.826 03:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.201 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.459 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.459 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.459 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.459 03:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.717 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.717 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.717 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.717 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.975 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.975 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.975 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.975 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.233 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.233 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:54.233 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.233 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.491 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.491 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:54.491 03:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.750 03:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:55.008 03:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:55.942 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:55.942 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.942 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.942 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.200 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.200 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:56.200 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.200 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.458 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.458 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.458 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.458 03:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.717 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.717 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.717 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.717 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.974 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.974 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.974 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.974 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.232 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
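Every one of the repeated checks above is the same port_status pattern: dump the bdevperf io paths over its RPC socket, pick one field (current, connected or accessible) for the path whose trsvcid matches, and compare it with the expected value. A minimal sketch reconstructed from the traced commands; the in-tree helper lives in host/multipath_status.sh and check_status simply calls it six times per iteration:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    # bdev_nvme_get_io_paths reports per-path flags per poll group; filter by trsvcid.
    actual=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}
# e.g. with ANA optimized/optimized on 4420/4421, 4420 carries the I/O:
port_status 4420 current true && port_status 4421 accessible true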
00:31:57.232 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:57.232 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.232 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.490 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.490 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:57.490 03:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:57.748 03:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:58.005 03:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:58.938 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:58.938 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:58.938 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.938 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:59.196 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.196 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:59.196 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.196 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.454 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.454 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.454 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.454 03:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.713 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.713 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
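The ANA transitions that drive these checks are two target-side RPC calls, one per listener, followed by a one-second settle. A sketch of the set_ANA_state step under the same NQN, address and ports shown in the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}
set_ANA_state inaccessible inaccessible   # as in the step traced above
sleep 1                                   # let bdevperf re-read the ANA log page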
00:31:59.713 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.713 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.975 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.975 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:59.975 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.975 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:00.233 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.233 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:00.233 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.233 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.491 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.491 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:00.491 03:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:00.749 03:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:01.007 03:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:01.940 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:01.940 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:01.940 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.940 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:02.198 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.198 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:02.198 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.198 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.456 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.456 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.456 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.456 03:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.714 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.714 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:02.714 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.714 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.972 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.972 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:02.972 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.972 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:03.230 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.230 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:03.230 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.230 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.487 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.487 03:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:03.745 03:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:03.745 03:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:04.002 03:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:04.260 03:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:05.194 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:05.194 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:05.194 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.194 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:05.452 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.452 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:05.452 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.452 03:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.710 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.710 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.710 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.710 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.968 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.968 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.968 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.968 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:06.226 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.226 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:06.226 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.226 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:06.485 03:31:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.485 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:06.485 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.485 03:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:06.743 03:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.743 03:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:06.743 03:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:07.000 03:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:07.258 03:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:08.191 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:08.191 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:08.191 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.191 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:08.449 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:08.449 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:08.449 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.449 03:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:08.707 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.707 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:08.707 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.707 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:08.965 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.965 03:31:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:08.965 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.965 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:09.224 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.224 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:09.224 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.224 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:09.482 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.482 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:09.482 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.482 03:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:09.740 03:31:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.740 03:31:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:09.740 03:31:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:09.740 03:31:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:09.999 03:31:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:11.372 03:31:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.372 03:31:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.630 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.630 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.630 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.630 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.888 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.888 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.888 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.888 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:12.146 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.146 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:12.146 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.146 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:12.404 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.404 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:12.404 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.404 03:31:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:12.662 03:31:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.663 03:31:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:12.663 03:31:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:12.921 03:31:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:13.179 03:31:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:14.115 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:14.115 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:14.115 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.115 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:14.374 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.374 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:14.374 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.374 03:31:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.640 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:14.640 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.640 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.640 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.943 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.943 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:14.943 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.943 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:15.201 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.201 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:15.201 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.201 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:15.459 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.459 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:15.459 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.460 03:31:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 563885 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 563885 ']' 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 563885 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 563885 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 563885' 00:32:15.718 killing process with pid 563885 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 563885 00:32:15.718 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 563885 00:32:15.718 Connection closed with partial response: 00:32:15.718 00:32:15.718 00:32:15.978 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 563885 00:32:15.978 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:15.978 [2024-07-23 03:31:08.492433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:15.978 [2024-07-23 03:31:08.492514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid563885 ] 00:32:15.978 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.978 [2024-07-23 03:31:08.553705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.978 [2024-07-23 03:31:08.642355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.978 Running I/O for 90 seconds... 
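The killprocess teardown traced above checks that the pid is still alive, refuses to signal anything running under sudo, then sends SIGTERM and reaps the process. A simplified sketch of that flow, following the traced autotest_common.sh steps; the in-tree helper also handles FreeBSD and sudo-wrapped targets:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 0                 # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1            # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # bdevperf may exit non-zero mid-I/O
}
killprocess 563885                              # the bdevperf instance from this run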
00:32:15.978 [2024-07-23 03:31:24.141191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.141675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.141700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:15.978 [2024-07-23 03:31:24.143922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.978 [2024-07-23 03:31:24.143985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.144975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.144991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:15.979 [2024-07-23 03:31:24.145034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.145971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.145987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:15.979 [2024-07-23 03:31:24.146225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.979 [2024-07-23 03:31:24.146240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:32:15.980 [2024-07-23 03:31:24.146351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:24.146981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:24.146998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.546996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.547968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.547984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
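The repeating NOTICE pairs in this stretch of the log are SPDK's nvme_qpair.c printing each outstanding I/O (243:nvme_io_qpair_print_command) together with its error completion (474:spdk_nvme_print_completion). Status 03/02 is status-code-type 0x3 (path related) with status code 0x2, "Asymmetric Access Inaccessible", i.e. the ANA group behind this listener has gone inaccessible, which is what the multipath-status test drives so that I/O has to fail over to the other path. When a run produces thousands of these lines, a per-qid and per-opcode tally is easier to read than the raw flood; a minimal sketch, assuming the console output has been captured to a file (build.log is a hypothetical name):

#!/usr/bin/env bash
# Summarize SPDK "ASYMMETRIC ACCESS INACCESSIBLE" noise from a saved console log.
log=${1:-build.log}   # hypothetical capture of this console output

echo "== completions with ANA status 03/02, by qid =="
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' "$log" \
  | awk '{print $NF}' | sort | uniq -c

echo "== affected commands, by opcode =="
grep 'nvme_io_qpair_print_command' "$log" \
  | grep -o 'NOTICE\*: [A-Z]*' | awk '{print $2}' | sort | uniq -c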
00:32:15.980 [2024-07-23 03:31:39.548226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.980 [2024-07-23 03:31:39.548392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.980 [2024-07-23 03:31:39.548429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.980 [2024-07-23 03:31:39.548466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:15.980 [2024-07-23 03:31:39.548720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.980 [2024-07-23 03:31:39.548744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.548967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.548983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.981 [2024-07-23 03:31:39.549443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:32:15.981 [2024-07-23 03:31:39.549623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:15.981 [2024-07-23 03:31:39.549800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.549970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:15.981 [2024-07-23 03:31:39.549991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.981 [2024-07-23 03:31:39.550023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:15.981 [2024-07-23 03:31:39.550063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:15.981 [2024-07-23 03:31:39.550101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:15.981 [2024-07-23 03:31:39.550140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:15.981 [2024-07-23 03:31:39.550178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:15.981 [2024-07-23 03:31:39.550219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:32:15.981 [2024-07-23 03:31:39.550246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:15.981 [2024-07-23 03:31:39.550264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:32:15.981 Received shutdown signal, test time was about 32.061792 seconds
00:32:15.981
00:32:15.982 Latency(us)
00:32:15.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:15.982 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:15.982 Verification LBA range: start 0x0 length 0x4000
00:32:15.982 Nvme0n1 : 32.06 8009.96 31.29 0.00 0.00 15955.87 248.79 4026531.84
00:32:15.982 ===================================================================================================================
00:32:15.982 Total : 8009.96 31.29 0.00 0.00 15955.87 248.79 4026531.84
00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 --
# nvmfcleanup 00:32:15.982 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:16.240 rmmod nvme_tcp 00:32:16.240 rmmod nvme_fabrics 00:32:16.240 rmmod nvme_keyring 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 563608 ']' 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 563608 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 563608 ']' 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 563608 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 563608 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 563608' 00:32:16.240 killing process with pid 563608 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 563608 00:32:16.240 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 563608 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.500 03:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.402 03:31:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:18.402 00:32:18.402 real 0m40.513s 00:32:18.402 user 2m2.198s 00:32:18.402 sys 0m10.200s 00:32:18.402 03:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:18.402 
03:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:18.402 ************************************ 00:32:18.402 END TEST nvmf_host_multipath_status 00:32:18.402 ************************************ 00:32:18.402 03:31:44 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:18.402 03:31:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:18.402 03:31:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:18.402 03:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.402 ************************************ 00:32:18.402 START TEST nvmf_discovery_remove_ifc 00:32:18.402 ************************************ 00:32:18.402 03:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:18.661 * Looking for test storage... 00:32:18.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
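Just above, nvmf/common.sh mints a per-run host identity: `nvme gen-hostnqn` produces the uuid-based NVME_HOSTNQN, the uuid portion is reused as NVME_HOSTID, and both are packed into the NVME_HOST array so later steps can splice them into their connect calls. The connect invocation itself is not traced at this point in the log, so the following is only a sketch of how those variables are normally consumed with stock nvme-cli options; the transport address, port and subsystem NQN are illustrative values taken from elsewhere in this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1), and deriving the host ID from the NQN suffix is an assumption about how common.sh builds it.

#!/usr/bin/env bash
# Sketch: how NVME_HOSTNQN / NVME_HOSTID typically end up on an nvme connect line.
NVME_HOSTNQN=$(nvme gen-hostnqn)                     # same helper the trace above runs
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                  # assumption: reuse the uuid part as host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Standard nvme-cli TCP connect; address, port and subsystem NQN mirror values
# seen elsewhere in this test run rather than anything printed here.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"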
00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.661 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.662 03:31:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.662 03:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@297 -- # x722=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
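The nvmf_tcp_init steps traced below give the target and the initiator their own L3 endpoints on a single machine: the first ice port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as 10.0.0.1, and an iptables rule opens TCP/4420 for the NVMe/TCP traffic. A minimal standalone sketch of the same setup, using the interface names from this run (they will differ on other hardware, and root privileges plus two cabled ports are assumed):

    # carve out a namespace for the target side and move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP in, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1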
00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.565 03:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.565 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.565 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:32:20.566 00:32:20.566 --- 10.0.0.2 ping statistics --- 00:32:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.566 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:32:20.566 00:32:20.566 --- 10.0.0.1 ping statistics --- 00:32:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.566 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=569948 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 569948 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 569948 ']' 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:20.566 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.566 [2024-07-23 03:31:47.116002] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:32:20.566 [2024-07-23 03:31:47.116073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.824 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.824 [2024-07-23 03:31:47.179302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.824 [2024-07-23 03:31:47.267442] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.824 [2024-07-23 03:31:47.267497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.824 [2024-07-23 03:31:47.267511] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.824 [2024-07-23 03:31:47.267522] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.824 [2024-07-23 03:31:47.267531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.824 [2024-07-23 03:31:47.267565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.825 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:20.825 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:20.825 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:20.825 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.825 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.083 [2024-07-23 03:31:47.416580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.083 [2024-07-23 03:31:47.424806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:21.083 null0 00:32:21.083 [2024-07-23 03:31:47.456725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=569972 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 569972 /tmp/host.sock 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 569972 ']' 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.083 
03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:21.083 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.083 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.083 [2024-07-23 03:31:47.525030] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:21.084 [2024-07-23 03:31:47.525111] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569972 ] 00:32:21.084 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.084 [2024-07-23 03:31:47.590809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.342 [2024-07-23 03:31:47.682148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.342 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.343 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.343 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:21.343 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.343 03:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.718 [2024-07-23 03:31:48.935546] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:22.718 [2024-07-23 03:31:48.935585] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:22.718 [2024-07-23 03:31:48.935611] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:22.718 [2024-07-23 03:31:49.063074] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:22.718 [2024-07-23 03:31:49.245325] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:22.718 [2024-07-23 03:31:49.245399] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:22.718 [2024-07-23 03:31:49.245442] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:22.718 [2024-07-23 03:31:49.245467] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:22.718 [2024-07-23 03:31:49.245509] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.718 [2024-07-23 03:31:49.253331] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ecedf0 was disconnected and freed. delete nvme_qpair. 
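At this point the target app (nvmfpid 569948, running inside the namespace) and the host app (hostpid 569972, listening on /tmp/host.sock) are both up; the test attaches through the discovery service on 10.0.0.2:8009 and then polls the host's bdev list until nvme0n1 appears. The get_bdev_list/wait_for_bdev helpers used above are local to discovery_remove_ifc.sh, but they reduce to a simple RPC poll; a sketch of the equivalent calls with SPDK's rpc.py (scripts/rpc.py in the SPDK tree, path shortened here), using the same flags seen in the trace:

    # start discovery on the host app, with aggressive reconnect/loss timeouts
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
    # poll until the expected namespace bdev shows up on the host
    while [[ "$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
        sleep 1
    done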
00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:22.718 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.977 03:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.911 03:31:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.285 03:31:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:26.218 03:31:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.152 03:31:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
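The fault itself was injected a few seconds earlier (host/discovery_remove_ifc.sh@75-76, at 03:31:49): the target's address was deleted and its interface taken down inside the namespace, which is what turns the established admin and I/O queue pairs into the errno-110 connection timeouts that follow. Reduced to the two commands, with names taken from this run:

    # yank the target-side interface out from under the connected controller
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down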
00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:28.085 03:31:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.344 [2024-07-23 03:31:54.686536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:28.344 [2024-07-23 03:31:54.686618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.344 [2024-07-23 03:31:54.686669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.344 [2024-07-23 03:31:54.686688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.344 [2024-07-23 03:31:54.686701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.344 [2024-07-23 03:31:54.686722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.344 [2024-07-23 03:31:54.686735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.344 [2024-07-23 03:31:54.686749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.344 [2024-07-23 03:31:54.686762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.344 [2024-07-23 03:31:54.686776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.344 [2024-07-23 03:31:54.686789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.344 [2024-07-23 03:31:54.686801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95f80 is same with the state(5) to be set 00:32:28.344 [2024-07-23 03:31:54.696554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95f80 (9): Bad file descriptor 00:32:28.344 [2024-07-23 03:31:54.706601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.278 [2024-07-23 03:31:55.720668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:29.278 [2024-07-23 
03:31:55.720740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95f80 with addr=10.0.0.2, port=4420 00:32:29.278 [2024-07-23 03:31:55.720771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95f80 is same with the state(5) to be set 00:32:29.278 [2024-07-23 03:31:55.720835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95f80 (9): Bad file descriptor 00:32:29.278 [2024-07-23 03:31:55.721316] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:29.278 [2024-07-23 03:31:55.721352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:29.278 [2024-07-23 03:31:55.721370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:29.278 [2024-07-23 03:31:55.721391] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:29.278 [2024-07-23 03:31:55.721421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:29.278 [2024-07-23 03:31:55.721442] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:29.278 03:31:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.212 [2024-07-23 03:31:56.723938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:30.212 [2024-07-23 03:31:56.723972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:30.212 [2024-07-23 03:31:56.723988] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:30.212 [2024-07-23 03:31:56.724003] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:30.212 [2024-07-23 03:31:56.724025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:30.212 [2024-07-23 03:31:56.724063] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:30.212 [2024-07-23 03:31:56.724105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.212 [2024-07-23 03:31:56.724131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.212 [2024-07-23 03:31:56.724152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.212 [2024-07-23 03:31:56.724170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.212 [2024-07-23 03:31:56.724185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.212 [2024-07-23 03:31:56.724201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.212 [2024-07-23 03:31:56.724217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.212 [2024-07-23 03:31:56.724233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.212 [2024-07-23 03:31:56.724250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.212 [2024-07-23 03:31:56.724266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.212 [2024-07-23 03:31:56.724281] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
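With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host retries roughly once a second and declares the controller lost after about two seconds, at which point the discovery entry for nqn.2016-06.io.spdk:cnode0 is dropped and nvme0n1 disappears from bdev_get_bdevs, which is exactly what wait_for_bdev '' has been polling for. The trace then restores the path so discovery can attach a fresh controller (nvme1); in isolation that restore step is just the mirror of the fault injection (interface names from this run):

    # bring the target interface back; discovery on 10.0.0.2:8009 re-attaches as nvme1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up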
00:32:30.212 [2024-07-23 03:31:56.724517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95410 (9): Bad file descriptor 00:32:30.212 [2024-07-23 03:31:56.725540] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:30.212 [2024-07-23 03:31:56.725565] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:30.212 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:30.469 03:31:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:31.419 03:31:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:32.371 [2024-07-23 03:31:58.740658] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:32.371 [2024-07-23 03:31:58.740683] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:32.371 [2024-07-23 03:31:58.740706] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:32.371 [2024-07-23 03:31:58.829012] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.371 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.371 [2024-07-23 03:31:58.932026] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:32.371 [2024-07-23 03:31:58.932079] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:32.371 [2024-07-23 03:31:58.932115] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:32.371 [2024-07-23 03:31:58.932139] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:32.371 [2024-07-23 03:31:58.932154] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:32.371 [2024-07-23 03:31:58.939119] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ea2d30 was disconnected and freed. delete nvme_qpair. 
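Once nvme1n1 is listed the test has passed and the harness tears everything down: killprocess stops the host app, nvmftestfini unloads the kernel NVMe/TCP modules and stops the target, and the namespace plumbing is undone. Condensed into a sketch (pids and names are from this run; the namespace removal is assumed to be what _remove_spdk_ns does in the harness, the trace only shows it being invoked):

    kill 569972                      # host app (hostpid)
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill 569948                      # target app (nvmfpid)
    ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1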
00:32:32.628 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:32.628 03:31:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 569972 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 569972 ']' 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 569972 00:32:33.561 03:31:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 569972 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 569972' 00:32:33.561 killing process with pid 569972 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 569972 00:32:33.561 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 569972 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:33.819 rmmod nvme_tcp 00:32:33.819 rmmod nvme_fabrics 00:32:33.819 rmmod nvme_keyring 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 569948 ']' 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 569948 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 569948 ']' 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 569948 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 569948 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 569948' 00:32:33.819 killing process with pid 569948 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 569948 00:32:33.819 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 569948 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.078 03:32:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.610 03:32:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:36.610 00:32:36.610 real 0m17.641s 00:32:36.610 user 0m25.819s 00:32:36.610 sys 0m2.911s 00:32:36.610 03:32:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:36.610 03:32:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.610 ************************************ 00:32:36.610 END TEST nvmf_discovery_remove_ifc 00:32:36.610 ************************************ 00:32:36.610 03:32:02 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:36.610 03:32:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:36.611 03:32:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:36.611 03:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.611 
************************************ 00:32:36.611 START TEST nvmf_identify_kernel_target 00:32:36.611 ************************************ 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:36.611 * Looking for test storage... 00:32:36.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:36.611 03:32:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:36.611 03:32:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:38.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:38.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:38.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:38.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.519 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:38.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:32:38.519 00:32:38.519 --- 10.0.0.2 ping statistics --- 00:32:38.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.519 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
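[Editor's note] The nvmf_tcp_init steps traced above move one NIC port (cvl_0_0) into a dedicated network namespace for the target side, keep the other port (cvl_0_1) in the default namespace for the initiator, assign 10.0.0.1/10.0.0.2, open TCP port 4420, and verify connectivity with ping in both directions (the 10.0.0.1 ping statistics continue below). A minimal sketch of the same plumbing, mirroring the names and addresses in the log rather than the exact nvmf/common.sh implementation:

  ip netns add cvl_0_0_ns_spdk                              # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the test port
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator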
00:32:38.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:32:38.520 00:32:38.520 --- 10.0.0.1 ping statistics --- 00:32:38.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.520 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.520 03:32:04 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:38.520 03:32:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:39.453 Waiting for block devices as requested 00:32:39.454 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:39.454 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:39.712 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:39.712 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:39.712 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:39.971 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:39.971 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:39.971 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:39.971 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:40.230 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:40.230 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:40.230 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:40.230 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:40.489 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:40.489 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:40.489 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:40.489 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:40.747 No valid GPT data, bailing 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:40.747 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:41.007 00:32:41.007 Discovery Log Number of Records 2, Generation counter 2 00:32:41.007 =====Discovery Log Entry 0====== 00:32:41.008 trtype: tcp 00:32:41.008 adrfam: ipv4 00:32:41.008 subtype: current discovery subsystem 00:32:41.008 treq: not specified, sq flow control disable supported 00:32:41.008 portid: 1 00:32:41.008 trsvcid: 4420 00:32:41.008 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:41.008 traddr: 10.0.0.1 00:32:41.008 eflags: none 00:32:41.008 sectype: none 00:32:41.008 =====Discovery Log Entry 1====== 00:32:41.008 trtype: tcp 00:32:41.008 adrfam: ipv4 00:32:41.008 subtype: nvme subsystem 00:32:41.008 treq: not specified, sq flow control disable supported 00:32:41.008 portid: 1 00:32:41.008 trsvcid: 4420 00:32:41.008 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:41.008 traddr: 10.0.0.1 00:32:41.008 eflags: none 00:32:41.008 sectype: none 00:32:41.008 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:41.008 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:41.008 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.008 ===================================================== 00:32:41.008 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:41.008 ===================================================== 00:32:41.008 Controller Capabilities/Features 00:32:41.008 ================================ 00:32:41.008 Vendor ID: 0000 00:32:41.008 Subsystem Vendor ID: 0000 00:32:41.008 Serial Number: c47b677e0e8dd39d41f1 00:32:41.008 Model Number: Linux 00:32:41.008 Firmware Version: 6.7.0-68 00:32:41.008 Recommended Arb Burst: 0 00:32:41.008 IEEE OUI Identifier: 00 00 00 00:32:41.008 Multi-path I/O 00:32:41.008 May have multiple subsystem ports: No 00:32:41.008 May have multiple 
controllers: No 00:32:41.008 Associated with SR-IOV VF: No 00:32:41.008 Max Data Transfer Size: Unlimited 00:32:41.008 Max Number of Namespaces: 0 00:32:41.008 Max Number of I/O Queues: 1024 00:32:41.008 NVMe Specification Version (VS): 1.3 00:32:41.008 NVMe Specification Version (Identify): 1.3 00:32:41.008 Maximum Queue Entries: 1024 00:32:41.008 Contiguous Queues Required: No 00:32:41.008 Arbitration Mechanisms Supported 00:32:41.008 Weighted Round Robin: Not Supported 00:32:41.008 Vendor Specific: Not Supported 00:32:41.008 Reset Timeout: 7500 ms 00:32:41.008 Doorbell Stride: 4 bytes 00:32:41.008 NVM Subsystem Reset: Not Supported 00:32:41.008 Command Sets Supported 00:32:41.008 NVM Command Set: Supported 00:32:41.008 Boot Partition: Not Supported 00:32:41.008 Memory Page Size Minimum: 4096 bytes 00:32:41.008 Memory Page Size Maximum: 4096 bytes 00:32:41.008 Persistent Memory Region: Not Supported 00:32:41.008 Optional Asynchronous Events Supported 00:32:41.008 Namespace Attribute Notices: Not Supported 00:32:41.008 Firmware Activation Notices: Not Supported 00:32:41.008 ANA Change Notices: Not Supported 00:32:41.008 PLE Aggregate Log Change Notices: Not Supported 00:32:41.008 LBA Status Info Alert Notices: Not Supported 00:32:41.008 EGE Aggregate Log Change Notices: Not Supported 00:32:41.008 Normal NVM Subsystem Shutdown event: Not Supported 00:32:41.008 Zone Descriptor Change Notices: Not Supported 00:32:41.008 Discovery Log Change Notices: Supported 00:32:41.008 Controller Attributes 00:32:41.008 128-bit Host Identifier: Not Supported 00:32:41.008 Non-Operational Permissive Mode: Not Supported 00:32:41.008 NVM Sets: Not Supported 00:32:41.008 Read Recovery Levels: Not Supported 00:32:41.008 Endurance Groups: Not Supported 00:32:41.008 Predictable Latency Mode: Not Supported 00:32:41.008 Traffic Based Keep ALive: Not Supported 00:32:41.008 Namespace Granularity: Not Supported 00:32:41.008 SQ Associations: Not Supported 00:32:41.008 UUID List: Not Supported 00:32:41.008 Multi-Domain Subsystem: Not Supported 00:32:41.008 Fixed Capacity Management: Not Supported 00:32:41.008 Variable Capacity Management: Not Supported 00:32:41.008 Delete Endurance Group: Not Supported 00:32:41.008 Delete NVM Set: Not Supported 00:32:41.008 Extended LBA Formats Supported: Not Supported 00:32:41.008 Flexible Data Placement Supported: Not Supported 00:32:41.008 00:32:41.008 Controller Memory Buffer Support 00:32:41.008 ================================ 00:32:41.008 Supported: No 00:32:41.008 00:32:41.008 Persistent Memory Region Support 00:32:41.008 ================================ 00:32:41.008 Supported: No 00:32:41.008 00:32:41.008 Admin Command Set Attributes 00:32:41.008 ============================ 00:32:41.008 Security Send/Receive: Not Supported 00:32:41.008 Format NVM: Not Supported 00:32:41.008 Firmware Activate/Download: Not Supported 00:32:41.008 Namespace Management: Not Supported 00:32:41.008 Device Self-Test: Not Supported 00:32:41.008 Directives: Not Supported 00:32:41.008 NVMe-MI: Not Supported 00:32:41.008 Virtualization Management: Not Supported 00:32:41.008 Doorbell Buffer Config: Not Supported 00:32:41.008 Get LBA Status Capability: Not Supported 00:32:41.008 Command & Feature Lockdown Capability: Not Supported 00:32:41.008 Abort Command Limit: 1 00:32:41.008 Async Event Request Limit: 1 00:32:41.008 Number of Firmware Slots: N/A 00:32:41.008 Firmware Slot 1 Read-Only: N/A 00:32:41.008 Firmware Activation Without Reset: N/A 00:32:41.008 Multiple Update Detection Support: N/A 
00:32:41.008 Firmware Update Granularity: No Information Provided 00:32:41.008 Per-Namespace SMART Log: No 00:32:41.008 Asymmetric Namespace Access Log Page: Not Supported 00:32:41.008 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:41.008 Command Effects Log Page: Not Supported 00:32:41.008 Get Log Page Extended Data: Supported 00:32:41.008 Telemetry Log Pages: Not Supported 00:32:41.008 Persistent Event Log Pages: Not Supported 00:32:41.008 Supported Log Pages Log Page: May Support 00:32:41.008 Commands Supported & Effects Log Page: Not Supported 00:32:41.008 Feature Identifiers & Effects Log Page:May Support 00:32:41.008 NVMe-MI Commands & Effects Log Page: May Support 00:32:41.008 Data Area 4 for Telemetry Log: Not Supported 00:32:41.008 Error Log Page Entries Supported: 1 00:32:41.008 Keep Alive: Not Supported 00:32:41.008 00:32:41.008 NVM Command Set Attributes 00:32:41.008 ========================== 00:32:41.008 Submission Queue Entry Size 00:32:41.008 Max: 1 00:32:41.008 Min: 1 00:32:41.008 Completion Queue Entry Size 00:32:41.008 Max: 1 00:32:41.008 Min: 1 00:32:41.008 Number of Namespaces: 0 00:32:41.008 Compare Command: Not Supported 00:32:41.008 Write Uncorrectable Command: Not Supported 00:32:41.008 Dataset Management Command: Not Supported 00:32:41.008 Write Zeroes Command: Not Supported 00:32:41.008 Set Features Save Field: Not Supported 00:32:41.008 Reservations: Not Supported 00:32:41.008 Timestamp: Not Supported 00:32:41.008 Copy: Not Supported 00:32:41.008 Volatile Write Cache: Not Present 00:32:41.008 Atomic Write Unit (Normal): 1 00:32:41.008 Atomic Write Unit (PFail): 1 00:32:41.008 Atomic Compare & Write Unit: 1 00:32:41.008 Fused Compare & Write: Not Supported 00:32:41.008 Scatter-Gather List 00:32:41.008 SGL Command Set: Supported 00:32:41.008 SGL Keyed: Not Supported 00:32:41.008 SGL Bit Bucket Descriptor: Not Supported 00:32:41.008 SGL Metadata Pointer: Not Supported 00:32:41.008 Oversized SGL: Not Supported 00:32:41.008 SGL Metadata Address: Not Supported 00:32:41.008 SGL Offset: Supported 00:32:41.008 Transport SGL Data Block: Not Supported 00:32:41.008 Replay Protected Memory Block: Not Supported 00:32:41.008 00:32:41.008 Firmware Slot Information 00:32:41.008 ========================= 00:32:41.008 Active slot: 0 00:32:41.008 00:32:41.008 00:32:41.008 Error Log 00:32:41.008 ========= 00:32:41.008 00:32:41.008 Active Namespaces 00:32:41.008 ================= 00:32:41.008 Discovery Log Page 00:32:41.008 ================== 00:32:41.008 Generation Counter: 2 00:32:41.008 Number of Records: 2 00:32:41.008 Record Format: 0 00:32:41.008 00:32:41.008 Discovery Log Entry 0 00:32:41.008 ---------------------- 00:32:41.008 Transport Type: 3 (TCP) 00:32:41.008 Address Family: 1 (IPv4) 00:32:41.008 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:41.008 Entry Flags: 00:32:41.008 Duplicate Returned Information: 0 00:32:41.008 Explicit Persistent Connection Support for Discovery: 0 00:32:41.008 Transport Requirements: 00:32:41.008 Secure Channel: Not Specified 00:32:41.008 Port ID: 1 (0x0001) 00:32:41.008 Controller ID: 65535 (0xffff) 00:32:41.008 Admin Max SQ Size: 32 00:32:41.009 Transport Service Identifier: 4420 00:32:41.009 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:41.009 Transport Address: 10.0.0.1 00:32:41.009 Discovery Log Entry 1 00:32:41.009 ---------------------- 00:32:41.009 Transport Type: 3 (TCP) 00:32:41.009 Address Family: 1 (IPv4) 00:32:41.009 Subsystem Type: 2 (NVM Subsystem) 00:32:41.009 Entry Flags: 
00:32:41.009 Duplicate Returned Information: 0 00:32:41.009 Explicit Persistent Connection Support for Discovery: 0 00:32:41.009 Transport Requirements: 00:32:41.009 Secure Channel: Not Specified 00:32:41.009 Port ID: 1 (0x0001) 00:32:41.009 Controller ID: 65535 (0xffff) 00:32:41.009 Admin Max SQ Size: 32 00:32:41.009 Transport Service Identifier: 4420 00:32:41.009 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:41.009 Transport Address: 10.0.0.1 00:32:41.009 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:41.009 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.009 get_feature(0x01) failed 00:32:41.009 get_feature(0x02) failed 00:32:41.009 get_feature(0x04) failed 00:32:41.009 ===================================================== 00:32:41.009 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:41.009 ===================================================== 00:32:41.009 Controller Capabilities/Features 00:32:41.009 ================================ 00:32:41.009 Vendor ID: 0000 00:32:41.009 Subsystem Vendor ID: 0000 00:32:41.009 Serial Number: d38981aeb1c9f4be4253 00:32:41.009 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:41.009 Firmware Version: 6.7.0-68 00:32:41.009 Recommended Arb Burst: 6 00:32:41.009 IEEE OUI Identifier: 00 00 00 00:32:41.009 Multi-path I/O 00:32:41.009 May have multiple subsystem ports: Yes 00:32:41.009 May have multiple controllers: Yes 00:32:41.009 Associated with SR-IOV VF: No 00:32:41.009 Max Data Transfer Size: Unlimited 00:32:41.009 Max Number of Namespaces: 1024 00:32:41.009 Max Number of I/O Queues: 128 00:32:41.009 NVMe Specification Version (VS): 1.3 00:32:41.009 NVMe Specification Version (Identify): 1.3 00:32:41.009 Maximum Queue Entries: 1024 00:32:41.009 Contiguous Queues Required: No 00:32:41.009 Arbitration Mechanisms Supported 00:32:41.009 Weighted Round Robin: Not Supported 00:32:41.009 Vendor Specific: Not Supported 00:32:41.009 Reset Timeout: 7500 ms 00:32:41.009 Doorbell Stride: 4 bytes 00:32:41.009 NVM Subsystem Reset: Not Supported 00:32:41.009 Command Sets Supported 00:32:41.009 NVM Command Set: Supported 00:32:41.009 Boot Partition: Not Supported 00:32:41.009 Memory Page Size Minimum: 4096 bytes 00:32:41.009 Memory Page Size Maximum: 4096 bytes 00:32:41.009 Persistent Memory Region: Not Supported 00:32:41.009 Optional Asynchronous Events Supported 00:32:41.009 Namespace Attribute Notices: Supported 00:32:41.009 Firmware Activation Notices: Not Supported 00:32:41.009 ANA Change Notices: Supported 00:32:41.009 PLE Aggregate Log Change Notices: Not Supported 00:32:41.009 LBA Status Info Alert Notices: Not Supported 00:32:41.009 EGE Aggregate Log Change Notices: Not Supported 00:32:41.009 Normal NVM Subsystem Shutdown event: Not Supported 00:32:41.009 Zone Descriptor Change Notices: Not Supported 00:32:41.009 Discovery Log Change Notices: Not Supported 00:32:41.009 Controller Attributes 00:32:41.009 128-bit Host Identifier: Supported 00:32:41.009 Non-Operational Permissive Mode: Not Supported 00:32:41.009 NVM Sets: Not Supported 00:32:41.009 Read Recovery Levels: Not Supported 00:32:41.009 Endurance Groups: Not Supported 00:32:41.009 Predictable Latency Mode: Not Supported 00:32:41.009 Traffic Based Keep ALive: Supported 00:32:41.009 Namespace Granularity: Not Supported 
00:32:41.009 SQ Associations: Not Supported 00:32:41.009 UUID List: Not Supported 00:32:41.009 Multi-Domain Subsystem: Not Supported 00:32:41.009 Fixed Capacity Management: Not Supported 00:32:41.009 Variable Capacity Management: Not Supported 00:32:41.009 Delete Endurance Group: Not Supported 00:32:41.009 Delete NVM Set: Not Supported 00:32:41.009 Extended LBA Formats Supported: Not Supported 00:32:41.009 Flexible Data Placement Supported: Not Supported 00:32:41.009 00:32:41.009 Controller Memory Buffer Support 00:32:41.009 ================================ 00:32:41.009 Supported: No 00:32:41.009 00:32:41.009 Persistent Memory Region Support 00:32:41.009 ================================ 00:32:41.009 Supported: No 00:32:41.009 00:32:41.009 Admin Command Set Attributes 00:32:41.009 ============================ 00:32:41.009 Security Send/Receive: Not Supported 00:32:41.009 Format NVM: Not Supported 00:32:41.009 Firmware Activate/Download: Not Supported 00:32:41.009 Namespace Management: Not Supported 00:32:41.009 Device Self-Test: Not Supported 00:32:41.009 Directives: Not Supported 00:32:41.009 NVMe-MI: Not Supported 00:32:41.009 Virtualization Management: Not Supported 00:32:41.009 Doorbell Buffer Config: Not Supported 00:32:41.009 Get LBA Status Capability: Not Supported 00:32:41.009 Command & Feature Lockdown Capability: Not Supported 00:32:41.009 Abort Command Limit: 4 00:32:41.009 Async Event Request Limit: 4 00:32:41.009 Number of Firmware Slots: N/A 00:32:41.009 Firmware Slot 1 Read-Only: N/A 00:32:41.009 Firmware Activation Without Reset: N/A 00:32:41.009 Multiple Update Detection Support: N/A 00:32:41.009 Firmware Update Granularity: No Information Provided 00:32:41.009 Per-Namespace SMART Log: Yes 00:32:41.009 Asymmetric Namespace Access Log Page: Supported 00:32:41.009 ANA Transition Time : 10 sec 00:32:41.009 00:32:41.009 Asymmetric Namespace Access Capabilities 00:32:41.009 ANA Optimized State : Supported 00:32:41.009 ANA Non-Optimized State : Supported 00:32:41.009 ANA Inaccessible State : Supported 00:32:41.009 ANA Persistent Loss State : Supported 00:32:41.009 ANA Change State : Supported 00:32:41.009 ANAGRPID is not changed : No 00:32:41.009 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:41.009 00:32:41.009 ANA Group Identifier Maximum : 128 00:32:41.009 Number of ANA Group Identifiers : 128 00:32:41.009 Max Number of Allowed Namespaces : 1024 00:32:41.009 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:41.009 Command Effects Log Page: Supported 00:32:41.009 Get Log Page Extended Data: Supported 00:32:41.009 Telemetry Log Pages: Not Supported 00:32:41.009 Persistent Event Log Pages: Not Supported 00:32:41.009 Supported Log Pages Log Page: May Support 00:32:41.009 Commands Supported & Effects Log Page: Not Supported 00:32:41.009 Feature Identifiers & Effects Log Page:May Support 00:32:41.009 NVMe-MI Commands & Effects Log Page: May Support 00:32:41.009 Data Area 4 for Telemetry Log: Not Supported 00:32:41.009 Error Log Page Entries Supported: 128 00:32:41.009 Keep Alive: Supported 00:32:41.009 Keep Alive Granularity: 1000 ms 00:32:41.009 00:32:41.009 NVM Command Set Attributes 00:32:41.009 ========================== 00:32:41.009 Submission Queue Entry Size 00:32:41.009 Max: 64 00:32:41.009 Min: 64 00:32:41.009 Completion Queue Entry Size 00:32:41.009 Max: 16 00:32:41.009 Min: 16 00:32:41.009 Number of Namespaces: 1024 00:32:41.009 Compare Command: Not Supported 00:32:41.009 Write Uncorrectable Command: Not Supported 00:32:41.009 Dataset Management Command: Supported 
00:32:41.009 Write Zeroes Command: Supported 00:32:41.009 Set Features Save Field: Not Supported 00:32:41.009 Reservations: Not Supported 00:32:41.009 Timestamp: Not Supported 00:32:41.009 Copy: Not Supported 00:32:41.009 Volatile Write Cache: Present 00:32:41.009 Atomic Write Unit (Normal): 1 00:32:41.009 Atomic Write Unit (PFail): 1 00:32:41.009 Atomic Compare & Write Unit: 1 00:32:41.009 Fused Compare & Write: Not Supported 00:32:41.009 Scatter-Gather List 00:32:41.009 SGL Command Set: Supported 00:32:41.009 SGL Keyed: Not Supported 00:32:41.009 SGL Bit Bucket Descriptor: Not Supported 00:32:41.009 SGL Metadata Pointer: Not Supported 00:32:41.009 Oversized SGL: Not Supported 00:32:41.009 SGL Metadata Address: Not Supported 00:32:41.009 SGL Offset: Supported 00:32:41.009 Transport SGL Data Block: Not Supported 00:32:41.009 Replay Protected Memory Block: Not Supported 00:32:41.009 00:32:41.009 Firmware Slot Information 00:32:41.009 ========================= 00:32:41.009 Active slot: 0 00:32:41.009 00:32:41.009 Asymmetric Namespace Access 00:32:41.009 =========================== 00:32:41.009 Change Count : 0 00:32:41.009 Number of ANA Group Descriptors : 1 00:32:41.009 ANA Group Descriptor : 0 00:32:41.009 ANA Group ID : 1 00:32:41.009 Number of NSID Values : 1 00:32:41.009 Change Count : 0 00:32:41.010 ANA State : 1 00:32:41.010 Namespace Identifier : 1 00:32:41.010 00:32:41.010 Commands Supported and Effects 00:32:41.010 ============================== 00:32:41.010 Admin Commands 00:32:41.010 -------------- 00:32:41.010 Get Log Page (02h): Supported 00:32:41.010 Identify (06h): Supported 00:32:41.010 Abort (08h): Supported 00:32:41.010 Set Features (09h): Supported 00:32:41.010 Get Features (0Ah): Supported 00:32:41.010 Asynchronous Event Request (0Ch): Supported 00:32:41.010 Keep Alive (18h): Supported 00:32:41.010 I/O Commands 00:32:41.010 ------------ 00:32:41.010 Flush (00h): Supported 00:32:41.010 Write (01h): Supported LBA-Change 00:32:41.010 Read (02h): Supported 00:32:41.010 Write Zeroes (08h): Supported LBA-Change 00:32:41.010 Dataset Management (09h): Supported 00:32:41.010 00:32:41.010 Error Log 00:32:41.010 ========= 00:32:41.010 Entry: 0 00:32:41.010 Error Count: 0x3 00:32:41.010 Submission Queue Id: 0x0 00:32:41.010 Command Id: 0x5 00:32:41.010 Phase Bit: 0 00:32:41.010 Status Code: 0x2 00:32:41.010 Status Code Type: 0x0 00:32:41.010 Do Not Retry: 1 00:32:41.010 Error Location: 0x28 00:32:41.010 LBA: 0x0 00:32:41.010 Namespace: 0x0 00:32:41.010 Vendor Log Page: 0x0 00:32:41.010 ----------- 00:32:41.010 Entry: 1 00:32:41.010 Error Count: 0x2 00:32:41.010 Submission Queue Id: 0x0 00:32:41.010 Command Id: 0x5 00:32:41.010 Phase Bit: 0 00:32:41.010 Status Code: 0x2 00:32:41.010 Status Code Type: 0x0 00:32:41.010 Do Not Retry: 1 00:32:41.010 Error Location: 0x28 00:32:41.010 LBA: 0x0 00:32:41.010 Namespace: 0x0 00:32:41.010 Vendor Log Page: 0x0 00:32:41.010 ----------- 00:32:41.010 Entry: 2 00:32:41.010 Error Count: 0x1 00:32:41.010 Submission Queue Id: 0x0 00:32:41.010 Command Id: 0x4 00:32:41.010 Phase Bit: 0 00:32:41.010 Status Code: 0x2 00:32:41.010 Status Code Type: 0x0 00:32:41.010 Do Not Retry: 1 00:32:41.010 Error Location: 0x28 00:32:41.010 LBA: 0x0 00:32:41.010 Namespace: 0x0 00:32:41.010 Vendor Log Page: 0x0 00:32:41.010 00:32:41.010 Number of Queues 00:32:41.010 ================ 00:32:41.010 Number of I/O Submission Queues: 128 00:32:41.010 Number of I/O Completion Queues: 128 00:32:41.010 00:32:41.010 ZNS Specific Controller Data 00:32:41.010 
============================ 00:32:41.010 Zone Append Size Limit: 0 00:32:41.010 00:32:41.010 00:32:41.010 Active Namespaces 00:32:41.010 ================= 00:32:41.010 get_feature(0x05) failed 00:32:41.010 Namespace ID:1 00:32:41.010 Command Set Identifier: NVM (00h) 00:32:41.010 Deallocate: Supported 00:32:41.010 Deallocated/Unwritten Error: Not Supported 00:32:41.010 Deallocated Read Value: Unknown 00:32:41.010 Deallocate in Write Zeroes: Not Supported 00:32:41.010 Deallocated Guard Field: 0xFFFF 00:32:41.010 Flush: Supported 00:32:41.010 Reservation: Not Supported 00:32:41.010 Namespace Sharing Capabilities: Multiple Controllers 00:32:41.010 Size (in LBAs): 1953525168 (931GiB) 00:32:41.010 Capacity (in LBAs): 1953525168 (931GiB) 00:32:41.010 Utilization (in LBAs): 1953525168 (931GiB) 00:32:41.010 UUID: 0fe4fbef-5343-4ff5-800d-99da47dc496c 00:32:41.010 Thin Provisioning: Not Supported 00:32:41.010 Per-NS Atomic Units: Yes 00:32:41.010 Atomic Boundary Size (Normal): 0 00:32:41.010 Atomic Boundary Size (PFail): 0 00:32:41.010 Atomic Boundary Offset: 0 00:32:41.010 NGUID/EUI64 Never Reused: No 00:32:41.010 ANA group ID: 1 00:32:41.010 Namespace Write Protected: No 00:32:41.010 Number of LBA Formats: 1 00:32:41.010 Current LBA Format: LBA Format #00 00:32:41.010 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:41.010 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:41.010 rmmod nvme_tcp 00:32:41.010 rmmod nvme_fabrics 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:41.010 03:32:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:43.543 
03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:43.543 03:32:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:44.477 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:44.477 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:44.477 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:45.412 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:45.412 00:32:45.412 real 0m9.322s 00:32:45.412 user 0m1.949s 00:32:45.412 sys 0m3.324s 00:32:45.412 03:32:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:45.412 03:32:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.412 ************************************ 00:32:45.412 END TEST nvmf_identify_kernel_target 00:32:45.412 ************************************ 00:32:45.670 03:32:12 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:45.670 03:32:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:45.670 03:32:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:45.670 03:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.670 ************************************ 00:32:45.670 START TEST nvmf_auth_host 00:32:45.670 ************************************ 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
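[Editor's note] The identify_kernel_nvmf test above exports a local NVMe namespace through the kernel nvmet target via configfs (configure_kernel_target) and removes it again (clean_kernel_target). The trace shows the echoed values but not the attribute files they are redirected into; the sketch below fills those in using the standard nvmet configfs layout, so treat it as an illustration rather than the literal nvmf/common.sh code.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model      # model string (where supported)
  echo 1            > $subsys/attr_allow_any_host                 # accept any host NQN
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path            # back namespace 1 with the local drive
  echo 1            > $subsys/namespaces/1/enable
  echo 10.0.0.1     > $nvmet/ports/1/addr_traddr                  # listen on the in-namespace address
  echo tcp          > $nvmet/ports/1/addr_trtype
  echo 4420         > $nvmet/ports/1/addr_trsvcid
  echo ipv4         > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/                        # expose the subsystem on the port

  # Teardown, as performed by clean_kernel_target above:
  echo 0 > $subsys/namespaces/1/enable
  rm -f $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir $subsys/namespaces/1 $nvmet/ports/1 $subsys
  modprobe -r nvmet_tcp nvmet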
00:32:45.670 * Looking for test storage... 00:32:45.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.670 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:45.671 03:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:47.573 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.574 
03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:47.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:47.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:47.574 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:47.574 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:47.574 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:47.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:32:47.833 00:32:47.833 --- 10.0.0.2 ping statistics --- 00:32:47.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.833 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:32:47.833 00:32:47.833 --- 10.0.0.1 ping statistics --- 00:32:47.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.833 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=577036 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 577036 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 577036 ']' 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
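The trace above has just finished nvmftestinit: the two ice ports on 0000:0a:00.0/0a:00.1 come up as cvl_0_0 (target side, moved into its own network namespace with 10.0.0.2) and cvl_0_1 (initiator side, 10.0.0.1), port 4420 is opened, both directions are ping-tested, nvme-tcp is loaded, and the SPDK target is started inside the namespace. A condensed recap of the same steps as they appear in the log, assuming the long workspace paths are shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    modprobe nvme-tcp
    # nvmfappstart then launches the target inside the namespace and
    # waitforlisten polls /var/tmp/spdk.sock until the RPC server is up
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &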
00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:47.833 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=084bf1406018ee588779492ec961f37d 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vFq 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 084bf1406018ee588779492ec961f37d 0 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 084bf1406018ee588779492ec961f37d 0 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=084bf1406018ee588779492ec961f37d 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vFq 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vFq 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vFq 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:48.092 
03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70a9b55f6b211f1e66bd2fc256c3723ec980a8e22fe10455deabacf96f4c3668 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7rU 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70a9b55f6b211f1e66bd2fc256c3723ec980a8e22fe10455deabacf96f4c3668 3 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70a9b55f6b211f1e66bd2fc256c3723ec980a8e22fe10455deabacf96f4c3668 3 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70a9b55f6b211f1e66bd2fc256c3723ec980a8e22fe10455deabacf96f4c3668 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7rU 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7rU 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7rU 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:48.092 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f492d320136aeb43d76eeb4c2f70d900a1d658958e22576 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9NZ 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f492d320136aeb43d76eeb4c2f70d900a1d658958e22576 0 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f492d320136aeb43d76eeb4c2f70d900a1d658958e22576 0 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f492d320136aeb43d76eeb4c2f70d900a1d658958e22576 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9NZ 00:32:48.350 03:32:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9NZ 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9NZ 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8aad1786d24fd8c1b532e566bcebe0f5dfa285a28a7d1863 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Hu 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8aad1786d24fd8c1b532e566bcebe0f5dfa285a28a7d1863 2 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8aad1786d24fd8c1b532e566bcebe0f5dfa285a28a7d1863 2 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8aad1786d24fd8c1b532e566bcebe0f5dfa285a28a7d1863 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Hu 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Hu 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Hu 00:32:48.350 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c8d3a4c89f7994e6b228b515944e854a 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SIQ 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c8d3a4c89f7994e6b228b515944e854a 1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c8d3a4c89f7994e6b228b515944e854a 1 
00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c8d3a4c89f7994e6b228b515944e854a 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SIQ 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SIQ 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SIQ 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2138edac0bb641b1bc860543f18af832 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5DG 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2138edac0bb641b1bc860543f18af832 1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2138edac0bb641b1bc860543f18af832 1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2138edac0bb641b1bc860543f18af832 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5DG 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5DG 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5DG 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=dd845d1739de5d1733c4203741b80cf959333c0c9e081cea 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nht 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd845d1739de5d1733c4203741b80cf959333c0c9e081cea 2 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd845d1739de5d1733c4203741b80cf959333c0c9e081cea 2 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd845d1739de5d1733c4203741b80cf959333c0c9e081cea 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nht 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nht 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Nht 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e4ed72ffdf57127d67a2961c92ee7578 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.K4k 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e4ed72ffdf57127d67a2961c92ee7578 0 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e4ed72ffdf57127d67a2961c92ee7578 0 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e4ed72ffdf57127d67a2961c92ee7578 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:48.351 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.K4k 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.K4k 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.K4k 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5df07d710e94248e57798ff40318f26d0422312beac08d09715d8995dbc7a2d 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.njI 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5df07d710e94248e57798ff40318f26d0422312beac08d09715d8995dbc7a2d 3 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5df07d710e94248e57798ff40318f26d0422312beac08d09715d8995dbc7a2d 3 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5df07d710e94248e57798ff40318f26d0422312beac08d09715d8995dbc7a2d 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.njI 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.njI 00:32:48.616 03:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.njI 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 577036 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 577036 ']' 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
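The host/auth.sh@73-77 section above builds five host keys (keys[0..4]) and four controller keys (ckeys[0..3]; ckeys[4] is deliberately left empty) with gen_dhchap_key <digest> <len>: each call reads len/2 random bytes with xxd, writes a DHHC-1 secret to a mktemp file, and chmods it to 0600. The inline "python -" step is not expanded in the trace; a plausible reconstruction of one call, consistent with the DHHC-1:<digest-id>:<base64>: strings that show up later in this log (the CRC-32 suffix and its byte order are assumptions), is:

    # sketch of gen_dhchap_key null 32  (digest id 0, 32-character hex secret)
    key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" > "$file" <<'PY'
    import sys, base64, zlib
    secret = sys.argv[1].encode()
    crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed: secret is suffixed with its CRC-32
    print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode(), end="")
    PY
    chmod 0600 "$file"

In this run the generated files are /tmp/spdk.key-null.vFq, .key-null.9NZ, .key-sha256.SIQ, .key-sha384.Nht and .key-sha512.njI for keys[0..4], and /tmp/spdk.key-sha512.7rU, .key-sha384.7Hu, .key-sha256.5DG and .key-null.K4k for ckeys[0..3].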
00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:48.616 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vFq 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7rU ]] 00:32:48.922 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7rU 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9NZ 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Hu ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Hu 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SIQ 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5DG ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5DG 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Nht 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.K4k ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.K4k 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.njI 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
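Before the kernel-target setup that continues below, the host/auth.sh@80-82 loop above has registered every secret file with the running nvmf_tgt as a named keyring entry. rpc_cmd in these traces is the autotest wrapper around SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, so the equivalent manual calls would look roughly like this (names and paths taken from the loop above):

    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.vFq
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7rU
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.9NZ
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Hu
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.SIQ
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5DG
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.Nht
    scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.K4k
    scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.njI   # ckeys[4] is empty, so no ckey4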
00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:48.923 03:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:49.857 Waiting for block devices as requested 00:32:49.857 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:50.115 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:50.115 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:50.372 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:50.372 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:50.372 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:50.372 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:50.629 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:50.629 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:50.629 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:50.886 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:50.886 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:50.886 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:50.886 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:51.144 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:51.144 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:51.144 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:51.711 No valid GPT data, bailing 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:51.711 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:51.712 00:32:51.712 Discovery Log Number of Records 2, Generation counter 2 00:32:51.712 =====Discovery Log Entry 0====== 00:32:51.712 trtype: tcp 00:32:51.712 adrfam: ipv4 00:32:51.712 subtype: current discovery subsystem 00:32:51.712 treq: not specified, sq flow control disable supported 00:32:51.712 portid: 1 00:32:51.712 trsvcid: 4420 00:32:51.712 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:51.712 traddr: 10.0.0.1 00:32:51.712 eflags: none 00:32:51.712 sectype: none 00:32:51.712 =====Discovery Log Entry 1====== 00:32:51.712 trtype: tcp 00:32:51.712 adrfam: ipv4 00:32:51.712 subtype: nvme subsystem 00:32:51.712 treq: not specified, sq flow control disable supported 00:32:51.712 portid: 1 00:32:51.712 trsvcid: 4420 00:32:51.712 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:51.712 traddr: 10.0.0.1 00:32:51.712 eflags: none 00:32:51.712 sectype: none 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 
]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.712 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.971 nvme0n1 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.971 
03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.971 
03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.971 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 nvme0n1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.230 03:32:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.230 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.489 nvme0n1 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
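The connect_authenticate pass in progress here (sha256 / ffdhe2048 / key index 1) shows both halves of a DH-HMAC-CHAP setup. On the target side, nvmet_auth_set_key echoes the hash, DH group and DHHC-1 secrets for the allowed host; the trace only shows the echoed values, so the configfs destinations below are an assumption about the kernel nvmet host attributes, and the long secrets are elided. The initiator-side RPCs are exactly as issued above:

    # target side (kernel nvmet) -- destination attributes assumed, values from the trace
    h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'     > $h/dhchap_hash
    echo ffdhe2048          > $h/dhchap_dhgroup
    echo 'DHHC-1:00:NWY...' > $h/dhchap_key        # keys[1], secret elided here
    echo 'DHHC-1:02:OGF...' > $h/dhchap_ctrl_key   # ckeys[1], secret elided here

    # initiator side (SPDK bdev_nvme), as issued via rpc_cmd in the trace
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers       # nvme0 / nvme0n1 should now be present
    scripts/rpc.py bdev_nvme_detach_controller nvme0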
00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.489 03:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.490 03:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.490 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.490 03:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 nvme0n1 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:52.748 03:32:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 nvme0n1 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.748 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.006 nvme0n1 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:53.006 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.007 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.265 nvme0n1 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.265 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.524 03:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.524 nvme0n1 00:32:53.524 
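(The nvmet_auth_set_key calls traced above stage the DH-HMAC-CHAP material on the kernel nvmet target before each reconnect: one echo for the digest 'hmac(sha256)', one for the DH group, one for the host key, and one for the controller key when a ckey is defined. A minimal sketch of what a single iteration appears to amount to, using the sha256/ffdhe3072 keyid=1 values from this trace; the configfs paths are an assumption, since the redirection targets are not visible in this excerpt, and the host NQN is the one used by the attach commands:

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed nvmet configfs layout
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest   (auth.sh@48)
  echo ffdhe3072 > "$host_dir/dhchap_dhgroup"        # DH group (auth.sh@49)
  echo 'DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==:' > "$host_dir/dhchap_key"        # host key (auth.sh@50)
  echo 'DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==:' > "$host_dir/dhchap_ctrl_key"   # ctrl key, only set when a ckey exists (auth.sh@51)
)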
03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:53.524 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.782 nvme0n1 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.782 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.040 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.040 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.040 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:54.040 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.041 nvme0n1 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.041 
03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.041 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.299 03:32:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 nvme0n1 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.299 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:54.300 03:32:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.300 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.557 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.558 03:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.816 nvme0n1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.816 03:32:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.816 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 nvme0n1 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.075 03:32:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.333 nvme0n1 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.333 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.591 03:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.850 nvme0n1 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.850 03:32:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.850 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.109 nvme0n1 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:56.109 03:32:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.109 03:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.675 nvme0n1 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.675 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.933 
03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.933 03:32:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.933 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.500 nvme0n1 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.500 03:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.066 nvme0n1 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.066 
03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.066 03:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.633 nvme0n1 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.633 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.200 nvme0n1 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.200 03:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.574 nvme0n1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.574 03:32:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.574 03:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.509 nvme0n1 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.509 03:32:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.443 nvme0n1 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.444 
03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
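The two host-side RPCs this iteration exercises are visible in the xtrace just above and below: bdev_nvme_set_options pins the negotiation to the digest/dhgroup pair under test, then bdev_nvme_attach_controller performs the authenticated connect. A minimal sketch for replaying this ffdhe8192/key3 case by hand, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and that the key3/ckey3 secrets were loaded the same way the harness loads them (values below are copied from the trace, not invented):

    # restrict the host to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # authenticated attach; the harness verifies success right after with
    # bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

On success nvme0 shows up in bdev_nvme_get_controllers and is detached again before the next keyid is tried.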
00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 03:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.381 nvme0n1 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.381 
03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.381 03:32:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.366 nvme0n1 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.366 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.367 03:32:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.625 nvme0n1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
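At this point the trace has moved on to the sha384 digest and restarted with the ffdhe2048 group; the structure driving all of these repetitions is the nested loop marked host/auth.sh@100-@104 in the lines above. A condensed reconstruction of that loop, assuming only what the xtrace itself exposes (loop variables and function names; the bodies of nvmet_auth_set_key and connect_authenticate are paraphrased from the per-line markers, not the actual script source):

    # host/auth.sh, as reconstructed from the @100-@104 xtrace markers
    for digest in "${digests[@]}"; do            # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do           # 0..4; keyid 4 has no controller key (ckey is empty)
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side with key/ckey
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # set_options + attach + verify + detach
        done
      done
    done

Each connect_authenticate pass ends with bdev_nvme_get_controllers piped through jq to confirm nvme0 exists, followed by bdev_nvme_detach_controller nvme0, which is why the recurring nvme0n1 lines and the detach call repeat throughout the log.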
00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.625 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.883 nvme0n1 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:04.883 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.884 nvme0n1 00:33:04.884 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.142 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.142 nvme0n1 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.143 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.401 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.402 nvme0n1 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.402 03:32:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.661 nvme0n1 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
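Within each iteration, connect_authenticate drives the SPDK host side over RPC; the sequence visible in the trace (shown here for ffdhe3072, keyid 1, with the initiator address 10.0.0.1 resolved by get_main_ns_ip in this run) is roughly:

  # sketch of one connect_authenticate pass, assuming rpc_cmd wraps SPDK's rpc.py
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # confirm the controller authenticated and attached, then tear it down
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0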
00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.661 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.919 nvme0n1 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.919 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.920 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 nvme0n1 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.178 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.437 nvme0n1 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.437 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.438 03:32:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.438 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.438 03:32:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.438 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.697 nvme0n1 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.697 03:32:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.697 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.955 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.213 nvme0n1 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.213 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.214 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.472 nvme0n1 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.472 03:32:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.472 03:32:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.472 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.731 nvme0n1 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.731 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:07.989 03:32:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.989 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.248 nvme0n1 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:08.248 03:32:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.506 nvme0n1 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.506 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.765 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.331 nvme0n1 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.331 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.332 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.332 03:32:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.332 03:32:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.332 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.332 03:32:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.898 nvme0n1 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.898 03:32:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:09.898 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.899 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.463 nvme0n1 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.463 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.464 03:32:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.029 nvme0n1 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.029 03:32:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.594 nvme0n1 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.594 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
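[Note] The trace above repeats one fixed RPC sequence for every digest/dhgroup/keyid combination. The following is a hedged, standalone sketch of that sequence, not the author's script: "rpc_cmd" is assumed to be SPDK's scripts/rpc.py (or the test-harness wrapper seen in the trace), and the key0..key4 / ckey0..ckey4 names are assumed to have been registered earlier in the test run, which this excerpt does not show.

# Sketch of the per-iteration auth check seen in the trace (assumptions noted above).
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the initiator to a single digest/dhgroup pair for this iteration.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the target at 10.0.0.1:4420 using DH-HMAC-CHAP key "key$keyid".
    # In the trace, --dhchap-ctrlr-key is omitted when no controller key exists (keyid 4).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach only succeeds if authentication passed; verify the controller, then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}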
00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.852 03:32:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 nvme0n1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.785 03:32:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 nvme0n1 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:13.719 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.720 03:32:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.093 nvme0n1 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.093 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.094 03:32:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.094 03:32:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.094 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.094 03:32:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.660 nvme0n1 00:33:15.660 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.660 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:15.660 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.660 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.660 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.918 03:32:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.918 03:32:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.919 03:32:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.919 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.919 03:32:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 nvme0n1 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 nvme0n1 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.858 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.118 03:32:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.118 nvme0n1 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.118 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 nvme0n1 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 03:32:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.376 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.377 03:32:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.377 03:32:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.635 nvme0n1 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.635 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.636 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.894 nvme0n1 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.894 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 nvme0n1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.153 
03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.153 03:32:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.153 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.446 nvme0n1 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
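The echo calls in the preceding entries are nvmet_auth_set_key provisioning the kernel target for the next combination: the HMAC digest (here hmac(sha512)), the DH group (ffdhe3072), the host's DHHC-1 secret and, when one is defined, the bidirectional controller secret. A minimal sketch of such a helper is shown below; the configfs mount point and the dhchap_* attribute names are assumptions about the nvmet host entry layout, not something this log confirms.

  # Hypothetical target-side helper mirroring the nvmet_auth_set_key calls in the trace;
  # host_dir and the dhchap_* attribute names are assumed, keys/ckeys come from the test.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe3072
      echo "${keys[keyid]}"  > "${host_dir}/dhchap_key"       # DHHC-1:..: host secret
      # a controller (bidirectional) secret is written only when one exists for this keyid
      if [[ -n ${ckeys[keyid]} ]]; then
          echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
      fi
  }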
00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.446 03:32:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.704 nvme0n1 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.705 03:32:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
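The get_main_ns_ip fragments here resolve which address the host should dial: an associative array maps each transport to the name of an environment variable, tcp resolves to NVMF_INITIATOR_IP (10.0.0.1 in this run) while rdma would use NVMF_FIRST_TARGET_IP. A condensed restatement of that logic, assuming the TEST_TRANSPORT and NVMF_* variables exported by the surrounding nvmf test scripts:

  # Condensed form of the get_main_ns_ip steps traced at nvmf/common.sh@741-755;
  # TEST_TRANSPORT / NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are assumed to be exported.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}  # variable name, e.g. NVMF_INITIATOR_IP
      ip=${!ip}                             # dereference it, e.g. 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }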
00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.705 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.963 nvme0n1 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.963 
03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.963 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.964 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.222 nvme0n1 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.222 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.481 nvme0n1 00:33:19.481 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.481 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.481 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.481 03:32:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.481 03:32:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.481 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.481 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.481 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.481 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.481 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.739 03:32:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.739 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.740 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.998 nvme0n1 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
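On the initiator side, each keyid iteration traced in the surrounding entries runs the same cycle: restrict the bdev_nvme layer to the digest and DH group under test, attach a controller with the matching DH-HMAC-CHAP secret(s), check that a controller named nvme0 (and its nvme0n1 namespace) shows up, and detach it before the next combination. Below is a condensed sketch of one such pass, assuming rpc_cmd is the autotest wrapper around the SPDK RPC client and that the key names key<N>/ckey<N> were registered with the initiator earlier in the run:

  # One connect/verify/detach pass, pieced together from the rpc_cmd calls above;
  # rpc_cmd forwarding to scripts/rpc.py and the pre-registered key names are assumptions.
  digest=sha512 dhgroup=ffdhe4096 keyid=2
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # confirm the authenticated controller is present, then tear it down
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0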
00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.998 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.257 nvme0n1 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:20.257 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.258 03:32:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.516 nvme0n1 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.516 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.774 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.032 nvme0n1 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.032 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
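The outer structure driving this whole stretch of the log is visible at host/auth.sh@101-104: one loop over DH groups, one over the key indices, with the target provisioned and the host connect attempted for every combination. A minimal sketch of that double loop follows; only the sha512 groups actually seen in this part of the trace are listed, the real script may iterate over more.

```bash
# Double loop traced at host/auth.sh@101-104: provision a key on the target,
# then try an authenticated connect from the host (connect_authenticate is
# sketched further below). Groups limited to those visible in this trace.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
	for keyid in "${!keys[@]}"; do            # key indices 0..4 seen in the log
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
		connect_authenticate sha512 "$dhgroup" "$keyid"
	done
done
```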
00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.033 03:32:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.598 nvme0n1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
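The host-side half of each iteration is traced at host/auth.sh@55-65: bdev_nvme_set_options restricts the DH-HMAC-CHAP digests and DH groups, bdev_nvme_attach_controller connects with the named key (and, when present, the controller key), and the run is counted as authenticated only if bdev_nvme_get_controllers reports a controller called nvme0, which is then detached. The sketch below reconstructs that flow; rpc_cmd, the NQNs and the key names key0..key4/ckey0..ckey4 are taken from the trace, the rest is a reconstruction rather than the verbatim script.

```bash
# Reconstruction of the host-side flow traced at host/auth.sh@55-65.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey=()

	# Pass a controller key only when one was registered for this keyid;
	# otherwise the array expands to nothing and the option is omitted.
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# Authentication succeeded only if the controller actually shows up.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
```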
00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.598 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.164 nvme0n1 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:22.164 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.165 03:32:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.731 nvme0n1 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.731 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.989 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.556 nvme0n1 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.556 03:32:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.123 nvme0n1 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.123 03:32:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg0YmYxNDA2MDE4ZWU1ODg3Nzk0OTJlYzk2MWYzN2Rdq5sd: 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzBhOWI1NWY2YjIxMWYxZTY2YmQyZmMyNTZjMzcyM2VjOTgwYThlMjJmZTEwNDU1ZGVhYmFjZjk2ZjRjMzY2OKDPY7w=: 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.123 03:32:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.055 nvme0n1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.055 03:32:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.989 nvme0n1 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.989 03:32:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkM2E0Yzg5Zjc5OTRlNmIyMjhiNTE1OTQ0ZTg1NGHGApos: 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: ]] 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzOGVkYWMwYmI2NDFiMWJjODYwNTQzZjE4YWY4MzImyUrp: 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.989 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.990 03:32:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.923 nvme0n1 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.923 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4NDVkMTczOWRlNWQxNzMzYzQyMDM3NDFiODBjZjk1OTMzM2MwYzllMDgxY2VhHjlkag==: 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRlZDcyZmZkZjU3MTI3ZDY3YTI5NjFjOTJlZTc1Nzi0OB5+: 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:27.181 03:32:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.181 03:32:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.114 nvme0n1 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVkZjA3ZDcxMGU5NDI0OGU1Nzc5OGZmNDAzMThmMjZkMDQyMjMxMmJlYWMwOGQwOTcxNWQ4OTk1ZGJjN2EyZCU2HfA=: 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:28.114 03:32:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.046 nvme0n1 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWY0OTJkMzIwMTM2YWViNDNkNzZlZWI0YzJmNzBkOTAwYTFkNjU4OTU4ZTIyNTc2nLxsXw==: 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFhZDE3ODZkMjRmZDhjMWI1MzJlNTY2YmNlYmUwZjVkZmEyODVhMjhhN2QxODYzbOwvsg==: 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.047 
03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.047 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.305 request: 00:33:29.305 { 00:33:29.305 "name": "nvme0", 00:33:29.305 "trtype": "tcp", 00:33:29.305 "traddr": "10.0.0.1", 00:33:29.305 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:29.305 "adrfam": "ipv4", 00:33:29.305 "trsvcid": "4420", 00:33:29.305 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:29.305 "method": "bdev_nvme_attach_controller", 00:33:29.305 "req_id": 1 00:33:29.305 } 00:33:29.305 Got JSON-RPC error response 00:33:29.305 response: 00:33:29.305 { 00:33:29.305 "code": -5, 00:33:29.305 "message": "Input/output error" 00:33:29.305 } 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:29.305 
03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.305 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.306 request: 00:33:29.306 { 00:33:29.306 "name": "nvme0", 00:33:29.306 "trtype": "tcp", 00:33:29.306 "traddr": "10.0.0.1", 00:33:29.306 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:29.306 "adrfam": "ipv4", 00:33:29.306 "trsvcid": "4420", 00:33:29.306 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:29.306 "dhchap_key": "key2", 00:33:29.306 "method": "bdev_nvme_attach_controller", 00:33:29.306 "req_id": 1 00:33:29.306 } 00:33:29.306 Got JSON-RPC error response 00:33:29.306 response: 00:33:29.306 { 00:33:29.306 "code": -5, 00:33:29.306 "message": "Input/output error" 00:33:29.306 } 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:29.306 
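The two rejected attach attempts traced above (first with no DHCHAP key, then with key2 against a subsystem provisioned for key1) both come back as JSON-RPC error -5, Input/output error, which is exactly what the NOT wrapper treats as a pass. A rough way to reproduce the same negative check by hand is sketched below; the rpc.py path, address, port, and NQNs are copied from the trace, while the surrounding shell (and the assumption that the DHCHAP keys were already registered earlier in the run) is illustrative only, not an excerpt of auth.sh.

    # Hypothetical manual repro of the negative DHCHAP check above; not part of auth.sh.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attaching with a mismatched key should fail with JSON-RPC error -5 (Input/output error).
    if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected: attach succeeded with the wrong DHCHAP key" >&2
        exit 1
    fi

    # The failed attempt must not leave a controller behind.
    test "$($RPC bdev_nvme_get_controllers | jq length)" -eq 0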
03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.306 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.564 request: 00:33:29.564 { 00:33:29.564 "name": "nvme0", 00:33:29.564 "trtype": "tcp", 00:33:29.564 "traddr": "10.0.0.1", 00:33:29.564 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:29.564 "adrfam": "ipv4", 00:33:29.564 "trsvcid": "4420", 00:33:29.564 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:29.564 "dhchap_key": "key1", 00:33:29.564 "dhchap_ctrlr_key": "ckey2", 00:33:29.564 "method": "bdev_nvme_attach_controller", 00:33:29.564 "req_id": 1 
00:33:29.564 } 00:33:29.564 Got JSON-RPC error response 00:33:29.564 response: 00:33:29.564 { 00:33:29.564 "code": -5, 00:33:29.564 "message": "Input/output error" 00:33:29.564 } 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:29.564 rmmod nvme_tcp 00:33:29.564 rmmod nvme_fabrics 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 577036 ']' 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 577036 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 577036 ']' 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 577036 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.564 03:32:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 577036 00:33:29.564 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:29.564 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:29.565 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 577036' 00:33:29.565 killing process with pid 577036 00:33:29.565 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 577036 00:33:29.565 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 577036 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:29.824 03:32:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.824 03:32:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:31.727 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:31.987 03:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:32.922 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:32.922 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:32.922 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:33.180 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:33.180 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:33.180 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:33.180 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:33.180 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:33.180 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:34.119 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:34.119 03:33:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vFq /tmp/spdk.key-null.9NZ /tmp/spdk.key-sha256.SIQ /tmp/spdk.key-sha384.Nht /tmp/spdk.key-sha512.njI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:34.119 03:33:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:35.545 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:35.545 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:35.545 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:35.545 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:35.545 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:35.545 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:35.545 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:35.545 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:35.545 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:35.545 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:35.545 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:35.545 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:35.545 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:35.545 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:35.545 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:35.545 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:35.545 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:35.545 00:33:35.545 real 0m49.851s 00:33:35.545 user 0m47.520s 00:33:35.545 sys 0m5.747s 00:33:35.545 03:33:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:35.545 03:33:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.545 ************************************ 00:33:35.545 END TEST nvmf_auth_host 00:33:35.545 ************************************ 00:33:35.545 03:33:01 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:35.545 03:33:01 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:35.545 03:33:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:35.545 03:33:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:35.545 03:33:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:35.545 ************************************ 00:33:35.545 START TEST nvmf_digest 00:33:35.545 ************************************ 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:35.545 * Looking for test storage... 
00:33:35.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:35.545 03:33:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.545 03:33:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.546 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:35.546 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:35.546 03:33:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:35.546 03:33:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:37.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:37.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:37.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:37.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.444 03:33:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:37.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:33:37.702 00:33:37.702 --- 10.0.0.2 ping statistics --- 00:33:37.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.702 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:33:37.702 00:33:37.702 --- 10.0.0.1 ping statistics --- 00:33:37.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.702 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:37.702 03:33:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:37.702 ************************************ 00:33:37.702 START TEST nvmf_digest_clean 00:33:37.702 ************************************ 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=586712 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 586712 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 586712 ']' 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.703 
03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:37.703 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:37.703 [2024-07-23 03:33:04.197270] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:37.703 [2024-07-23 03:33:04.197343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.703 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.703 [2024-07-23 03:33:04.265581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.961 [2024-07-23 03:33:04.359629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.961 [2024-07-23 03:33:04.359689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.961 [2024-07-23 03:33:04.359705] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.961 [2024-07-23 03:33:04.359718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.961 [2024-07-23 03:33:04.359729] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:37.961 [2024-07-23 03:33:04.359758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.961 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:38.219 null0 00:33:38.219 [2024-07-23 03:33:04.541901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.219 [2024-07-23 03:33:04.566102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=586734 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 586734 /var/tmp/bperf.sock 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 586734 ']' 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:38.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:38.219 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:38.219 [2024-07-23 03:33:04.613063] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:38.219 [2024-07-23 03:33:04.613139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586734 ] 00:33:38.219 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.219 [2024-07-23 03:33:04.677664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.219 [2024-07-23 03:33:04.773356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.476 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:38.476 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:38.476 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:38.476 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:38.476 03:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:38.734 03:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.734 03:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.991 nvme0n1 00:33:38.991 03:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:38.991 03:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.991 Running I/O for 2 seconds... 
00:33:41.527 00:33:41.527 Latency(us) 00:33:41.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.527 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:41.527 nvme0n1 : 2.01 18250.91 71.29 0.00 0.00 7005.91 3422.44 18058.81 00:33:41.527 =================================================================================================================== 00:33:41.527 Total : 18250.91 71.29 0.00 0.00 7005.91 3422.44 18058.81 00:33:41.527 0 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:41.527 | select(.opcode=="crc32c") 00:33:41.527 | "\(.module_name) \(.executed)"' 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 586734 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 586734 ']' 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 586734 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 586734 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 586734' 00:33:41.527 killing process with pid 586734 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 586734 00:33:41.527 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.527 00:33:41.527 Latency(us) 00:33:41.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.527 =================================================================================================================== 00:33:41.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.527 03:33:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 586734 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:41.786 03:33:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=587461 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 587461 /var/tmp/bperf.sock 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 587461 ']' 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:41.786 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.786 [2024-07-23 03:33:08.164064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:41.786 [2024-07-23 03:33:08.164139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587461 ] 00:33:41.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:41.786 Zero copy mechanism will not be used. 
00:33:41.786 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.786 [2024-07-23 03:33:08.225200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.786 [2024-07-23 03:33:08.313997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.045 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.045 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:42.045 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:42.045 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:42.045 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:42.303 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.303 03:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.562 nvme0n1 00:33:42.562 03:33:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:42.562 03:33:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.820 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:42.820 Zero copy mechanism will not be used. 00:33:42.820 Running I/O for 2 seconds... 
00:33:44.718 00:33:44.718 Latency(us) 00:33:44.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.718 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:44.718 nvme0n1 : 2.00 2809.88 351.24 0.00 0.00 5689.97 5072.97 12281.93 00:33:44.718 =================================================================================================================== 00:33:44.718 Total : 2809.88 351.24 0.00 0.00 5689.97 5072.97 12281.93 00:33:44.718 0 00:33:44.718 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:44.718 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:44.718 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:44.718 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:44.718 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:44.718 | select(.opcode=="crc32c") 00:33:44.718 | "\(.module_name) \(.executed)"' 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 587461 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 587461 ']' 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 587461 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 587461 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 587461' 00:33:44.977 killing process with pid 587461 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 587461 00:33:44.977 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.977 00:33:44.977 Latency(us) 00:33:44.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.977 =================================================================================================================== 00:33:44.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.977 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 587461 00:33:45.234 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:45.234 03:33:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=588053 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 588053 /var/tmp/bperf.sock 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 588053 ']' 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:45.235 03:33:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:45.235 [2024-07-23 03:33:11.808718] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:45.235 [2024-07-23 03:33:11.808795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588053 ] 00:33:45.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.493 [2024-07-23 03:33:11.867769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.493 [2024-07-23 03:33:11.955493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.493 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:45.493 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:45.493 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:45.493 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:45.493 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:46.060 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.060 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.319 nvme0n1 00:33:46.319 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:46.319 03:33:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:46.580 Running I/O for 2 seconds... 
00:33:48.482 00:33:48.482 Latency(us) 00:33:48.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.482 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:48.482 nvme0n1 : 2.01 20990.06 81.99 0.00 0.00 6088.45 3203.98 17379.18 00:33:48.482 =================================================================================================================== 00:33:48.482 Total : 20990.06 81.99 0.00 0.00 6088.45 3203.98 17379.18 00:33:48.482 0 00:33:48.482 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:48.482 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:48.482 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:48.482 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:48.482 | select(.opcode=="crc32c") 00:33:48.482 | "\(.module_name) \(.executed)"' 00:33:48.482 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 588053 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 588053 ']' 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 588053 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 588053 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 588053' 00:33:48.740 killing process with pid 588053 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 588053 00:33:48.740 Received shutdown signal, test time was about 2.000000 seconds 00:33:48.740 00:33:48.740 Latency(us) 00:33:48.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.740 =================================================================================================================== 00:33:48.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.740 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 588053 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:48.998 03:33:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=588578 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 588578 /var/tmp/bperf.sock 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 588578 ']' 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:48.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:48.998 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:48.998 [2024-07-23 03:33:15.558230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:48.998 [2024-07-23 03:33:15.558305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588578 ] 00:33:48.998 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:48.998 Zero copy mechanism will not be used. 
00:33:49.256 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.256 [2024-07-23 03:33:15.617228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.256 [2024-07-23 03:33:15.702822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.256 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:49.256 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:49.256 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:49.256 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:49.256 03:33:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:49.823 03:33:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.823 03:33:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.080 nvme0n1 00:33:50.080 03:33:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:50.080 03:33:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:50.081 Zero copy mechanism will not be used. 00:33:50.081 Running I/O for 2 seconds... 
00:33:52.633 00:33:52.633 Latency(us) 00:33:52.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.633 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:52.633 nvme0n1 : 2.01 1877.49 234.69 0.00 0.00 8499.76 6650.69 16311.18 00:33:52.633 =================================================================================================================== 00:33:52.633 Total : 1877.49 234.69 0.00 0.00 8499.76 6650.69 16311.18 00:33:52.633 0 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:52.633 | select(.opcode=="crc32c") 00:33:52.633 | "\(.module_name) \(.executed)"' 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 588578 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 588578 ']' 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 588578 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 588578 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 588578' 00:33:52.633 killing process with pid 588578 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 588578 00:33:52.633 Received shutdown signal, test time was about 2.000000 seconds 00:33:52.633 00:33:52.633 Latency(us) 00:33:52.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.633 =================================================================================================================== 00:33:52.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.633 03:33:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 588578 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 586712 00:33:52.633 03:33:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 586712 ']' 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 586712 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 586712 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 586712' 00:33:52.633 killing process with pid 586712 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 586712 00:33:52.633 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 586712 00:33:52.891 00:33:52.891 real 0m15.268s 00:33:52.891 user 0m30.605s 00:33:52.891 sys 0m3.924s 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.891 ************************************ 00:33:52.891 END TEST nvmf_digest_clean 00:33:52.891 ************************************ 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.891 ************************************ 00:33:52.891 START TEST nvmf_digest_error 00:33:52.891 ************************************ 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:52.891 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.149 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=589018 00:33:53.149 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:53.149 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 589018 00:33:53.149 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 589018 ']' 00:33:53.149 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.150 [2024-07-23 03:33:19.516781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:53.150 [2024-07-23 03:33:19.516853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.150 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.150 [2024-07-23 03:33:19.583879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.150 [2024-07-23 03:33:19.674263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.150 [2024-07-23 03:33:19.674325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.150 [2024-07-23 03:33:19.674341] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.150 [2024-07-23 03:33:19.674354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.150 [2024-07-23 03:33:19.674366] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.150 [2024-07-23 03:33:19.674397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.150 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 [2024-07-23 03:33:19.734997] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.408 03:33:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 null0 00:33:53.408 [2024-07-23 03:33:19.850463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.408 [2024-07-23 03:33:19.874703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=589051 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 589051 /var/tmp/bperf.sock 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 589051 ']' 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:53.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.408 03:33:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 [2024-07-23 03:33:19.926214] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:53.408 [2024-07-23 03:33:19.926301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589051 ] 00:33:53.408 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.667 [2024-07-23 03:33:19.993889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.667 [2024-07-23 03:33:20.090808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.667 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.667 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:53.667 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:53.667 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:53.925 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:53.925 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.925 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.182 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.182 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.182 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.440 nvme0n1 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:54.440 03:33:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.440 Running I/O for 2 seconds... 
00:33:54.440 [2024-07-23 03:33:20.999473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.440 [2024-07-23 03:33:20.999529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.440 [2024-07-23 03:33:20.999563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.017217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.017256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.017287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.039227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.039267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.039296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.063887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.063920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.063958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.088333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.088371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.108901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.108940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.108970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.126640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.698 [2024-07-23 03:33:21.126687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.698 [2024-07-23 03:33:21.126713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.698 [2024-07-23 03:33:21.149901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.149952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.149984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.699 [2024-07-23 03:33:21.173945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.173997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.174028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.699 [2024-07-23 03:33:21.196500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.196537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.196575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.699 [2024-07-23 03:33:21.212525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.212564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.212594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.699 [2024-07-23 03:33:21.242634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.242681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.242707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.699 [2024-07-23 03:33:21.259303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.699 [2024-07-23 03:33:21.259340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.699 [2024-07-23 03:33:21.259371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.283854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.283885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.283927] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.307420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.307458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.307488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.331931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.331961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.332001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.355322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.355360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.355389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.379788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.379820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.379845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.403339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.403381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.403413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.425593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.425636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.425677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.450207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.450247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 
03:33:21.450278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.466263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.466300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.466330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.977 [2024-07-23 03:33:21.488704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.977 [2024-07-23 03:33:21.488735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.977 [2024-07-23 03:33:21.488760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.978 [2024-07-23 03:33:21.512541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.978 [2024-07-23 03:33:21.512578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.978 [2024-07-23 03:33:21.512609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.978 [2024-07-23 03:33:21.536781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:54.978 [2024-07-23 03:33:21.536814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.978 [2024-07-23 03:33:21.536841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.560295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.560333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.560363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.584972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.585039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.607684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.607714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2427 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.607739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.625040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.625078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.625109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.647083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.647121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.647152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.671117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.671154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.671184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.687702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.687732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.687758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.711167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.711205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.711236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.733178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.733215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.733245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.757290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.757327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:21314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.757358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.774461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.774499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.774536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.236 [2024-07-23 03:33:21.796166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.236 [2024-07-23 03:33:21.796203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.236 [2024-07-23 03:33:21.796233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.821105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.821143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.821174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.844248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.844285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.844316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.868166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.868204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.890972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.891009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.891040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.907792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.907822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.907846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.932359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.932396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.932426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.955768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.955800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.955824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.978217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.978255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.978285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:21.994782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:21.994812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:21.994837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:22.015841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:22.015876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:22.015920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.494 [2024-07-23 03:33:22.040193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.494 [2024-07-23 03:33:22.040230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.494 [2024-07-23 03:33:22.040260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.495 [2024-07-23 03:33:22.064008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 
00:33:55.495 [2024-07-23 03:33:22.064045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.495 [2024-07-23 03:33:22.064077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.086847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.086880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.086919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.107839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.107872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.107912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.125558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.125595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.125655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.149605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.149650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.149699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.173600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.173661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.173687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.197596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.197663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.197691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.221304] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.221342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.221372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.241694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.241726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.241751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.258157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.258228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.283729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.283761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.283786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.753 [2024-07-23 03:33:22.305662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:55.753 [2024-07-23 03:33:22.305692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-07-23 03:33:22.305717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.329459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.329496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.329526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.353622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.353679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.353705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.378082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.378120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.378151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.400870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.400901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.400939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.417735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.417790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.439858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.439904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.439935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.463696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.463726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.463750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.487452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.487490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.487521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.511478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.511514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.511545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.534564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.534601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.558581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.558627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.558671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.011 [2024-07-23 03:33:22.584235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.011 [2024-07-23 03:33:22.584272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.011 [2024-07-23 03:33:22.584302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.599391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.599428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.599457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.623155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.623193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.623223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.647935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.647972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.648002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.663141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.663179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.663209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.686248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.686285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.686315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.708461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.708498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.708529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.731512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.732565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.269 [2024-07-23 03:33:22.732610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.269 [2024-07-23 03:33:22.756334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.269 [2024-07-23 03:33:22.756371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.270 [2024-07-23 03:33:22.756402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.270 [2024-07-23 03:33:22.781077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.270 [2024-07-23 03:33:22.781116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.270 [2024-07-23 03:33:22.781146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.270 [2024-07-23 03:33:22.801695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.270 [2024-07-23 03:33:22.801725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.270 [2024-07-23 03:33:22.801749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.270 [2024-07-23 03:33:22.817908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.270 [2024-07-23 03:33:22.817946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:56.270 [2024-07-23 03:33:22.817976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.270 [2024-07-23 03:33:22.842337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.270 [2024-07-23 03:33:22.842375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.270 [2024-07-23 03:33:22.842405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.865352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.865390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.528 [2024-07-23 03:33:22.865421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.889595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.528 [2024-07-23 03:33:22.889686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.911989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.912019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.528 [2024-07-23 03:33:22.912044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.928756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.928788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.528 [2024-07-23 03:33:22.928814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.951514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.951551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.528 [2024-07-23 03:33:22.951582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.528 [2024-07-23 03:33:22.974810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1085360) 00:33:56.528 [2024-07-23 03:33:22.974840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.528 [2024-07-23 03:33:22.974865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.528
00:33:56.528 Latency(us)
00:33:56.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.528 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:56.528 nvme0n1 : 2.01 11468.83 44.80 0.00 0.00 11144.21 4296.25 34758.35
00:33:56.528 ===================================================================================================================
00:33:56.528 Total : 11468.83 44.80 0.00 0.00 11144.21 4296.25 34758.35
00:33:56.528 0
00:33:56.528 03:33:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:56.528 03:33:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:56.528 03:33:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:56.528 03:33:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:56.528 | .driver_specific
00:33:56.528 | .nvme_error
00:33:56.528 | .status_code
00:33:56.528 | .command_transient_transport_error'
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 90 > 0 ))
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 589051
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 589051 ']'
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 589051
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 589051
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 589051'
00:33:56.786 killing process with pid 589051
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 589051
00:33:56.786 Received shutdown signal, test time was about 2.000000 seconds
00:33:56.786
00:33:56.786 Latency(us)
00:33:56.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.786 ===================================================================================================================
00:33:56.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:56.786 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 589051
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=589564
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 589564 /var/tmp/bperf.sock
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 589564 ']'
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:57.043 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.043 [2024-07-23 03:33:23.535873] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:33:57.043 [2024-07-23 03:33:23.535956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589564 ]
00:33:57.043 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:57.043 Zero copy mechanism will not be used.
00:33:57.043 EAL: No free 2048 kB hugepages reported on node 1
00:33:57.043 [2024-07-23 03:33:23.597157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:57.301 [2024-07-23 03:33:23.687192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:57.301 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:57.301 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:57.301 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:57.301 03:33:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:57.559 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:57.817 nvme0n1
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:57.817 03:33:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:58.075 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:58.075 Zero copy mechanism will not be used.
00:33:58.075 Running I/O for 2 seconds...
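For orientation, the xtrace above captures the whole shape of one digest-error pass: bdevperf is started against /var/tmp/bperf.sock, NVMe error statistics are enabled, the controller is attached with --ddgst so data digests are verified, crc32c corruption is re-armed through accel_error_inject_error, I/O is driven with bdevperf.py perform_tests, and afterwards the command_transient_transport_error counter is read back via bdev_get_iostat (visible at the top of this block for the previous pass). The lines below are a condensed sketch of that same RPC sequence, not part of the log; SPDK_DIR stands in for the workspace checkout path, and it assumes a bdevperf instance already listening on the socket and a TCP target serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, as in this run.

    # Sketch only -- mirrors the RPC calls visible in the trace above.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Enable per-controller NVMe error statistics; retry count of -1 as used by the harness.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start from a clean injection state, then attach with data digest (--ddgst) enabled.
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c corruption (same arguments the harness passes) so received
    # data digests fail verification during the read workload.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the 2-second workload that produces the transport-error lines seen in this log.
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # Digest failures show up as transient transport errors in the bdev's error counters.
    $rpc bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The counter read at the end is what the harness compares against zero (the "(( 90 > 0 ))" check earlier in this block) to decide that digest errors were actually detected.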
00:33:58.075 [2024-07-23 03:33:24.508810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.508865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.508913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.521033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.521071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.521102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.533023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.533061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.533091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.545013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.545061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.545092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.556975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.557023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.557051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.569077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.569114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.569145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.581003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.581039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.581081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.593017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.593053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.075 [2024-07-23 03:33:24.593084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.075 [2024-07-23 03:33:24.605224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.075 [2024-07-23 03:33:24.605261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.076 [2024-07-23 03:33:24.605292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.076 [2024-07-23 03:33:24.617202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.076 [2024-07-23 03:33:24.617239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.076 [2024-07-23 03:33:24.617269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.076 [2024-07-23 03:33:24.629332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.076 [2024-07-23 03:33:24.629369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.076 [2024-07-23 03:33:24.629400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.076 [2024-07-23 03:33:24.642155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.076 [2024-07-23 03:33:24.642191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.076 [2024-07-23 03:33:24.642221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.334 [2024-07-23 03:33:24.655009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.334 [2024-07-23 03:33:24.655046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.334 [2024-07-23 03:33:24.655077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.334 [2024-07-23 03:33:24.667520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.334 [2024-07-23 03:33:24.667557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.334 [2024-07-23 03:33:24.667587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.334 [2024-07-23 03:33:24.680110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.334 [2024-07-23 03:33:24.680147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.334 [2024-07-23 03:33:24.680177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.334 [2024-07-23 03:33:24.692131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.692173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.692204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.704034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.704070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.704100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.715978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.716015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.716045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.727959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.728007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.728038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.739941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.739990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.740020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.751953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.752004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.335 [2024-07-23 03:33:24.752034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.763909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.763939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.763980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.776020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.776056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.776086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.788606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.788665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.788699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.800703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.800735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.800761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.812683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.812715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.812741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.824623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.824658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.824701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.836513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.836548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.836579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.848621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.848657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.848699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.860648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.860681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.860709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.872594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.872638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.872682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.884519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.884554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.884583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.896568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.896610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.896652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.335 [2024-07-23 03:33:24.908623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.335 [2024-07-23 03:33:24.908672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.335 [2024-07-23 03:33:24.908700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.920576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.920620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.920666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.932633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.932680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.932707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.944706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.944753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.944778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.956977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.957021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.957047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.969819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.969866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.969908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.981994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.982030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.982060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:24.994020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:24.994055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:24.994085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.006056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 
00:33:58.594 [2024-07-23 03:33:25.006093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.006123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.018109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.018145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.018176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.030092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.030128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.030158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.042148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.042185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.042214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.054086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.054122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.054151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.066003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.066039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.066069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.077940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.077986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.078016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.089933] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.089979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.090010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.101979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.102015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.102053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.113920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.113969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.113999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.126040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.126075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.126105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.138656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.138689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.138717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.151047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.151082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.594 [2024-07-23 03:33:25.163000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.594 [2024-07-23 03:33:25.163036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.594 [2024-07-23 03:33:25.163068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.174895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.174927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.186907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.186955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.186981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.199006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.199042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.199072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.211001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.211045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.211076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.223040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.223076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.223107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.234994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.235030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.235060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.247320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.247356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.247386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.259454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.259490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.259520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.271419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.271455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.271486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.283320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.283355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.283385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.295262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.295298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.295329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.307175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.307211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.307247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.319144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.319180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.319211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.331052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.331087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.331117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.343026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.343062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.343091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.355033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.355069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.355099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.367055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.367091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.367120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.379021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.379057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.379087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.391344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.391381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.403498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.403533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.403564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.415458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.415500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.853 [2024-07-23 03:33:25.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.853 [2024-07-23 03:33:25.427579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:58.853 [2024-07-23 03:33:25.427623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.853 [2024-07-23 03:33:25.427668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.440108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.440145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.440175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.451982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.452019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.452048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.464139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.464175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.464205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.476168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.476204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.476235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.488277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.488314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.488345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.500501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.500537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.500567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.512452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.512489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.512520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.524941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.524978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.525008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.536956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.537005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.537036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.548971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.549008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.549038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.561112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.561150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.561180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.573032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.573068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.573099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.585082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.585119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.585150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.597055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.597091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.597121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.609007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.609044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.609073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.621115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.621151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.621188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.633206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.633242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.633272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.645282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.645318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.645348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.657318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.657354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.657383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.669330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 
00:33:59.113 [2024-07-23 03:33:25.669366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.669396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.113 [2024-07-23 03:33:25.681217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.113 [2024-07-23 03:33:25.681252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.113 [2024-07-23 03:33:25.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.693027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.693064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.693095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.705142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.705208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.717070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.717107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.717137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.729142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.729184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.729215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.741144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.741179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.741208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.753040] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.753077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.753106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.764989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.765025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.765055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.777177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.777213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.777242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.789131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.789167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.789197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.801034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.801070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.801100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.813115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.813151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.813180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.824999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.825034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.825071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.837109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.837144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.837175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.849004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.849040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.849071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.860970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.861006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.861037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.372 [2024-07-23 03:33:25.872941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.372 [2024-07-23 03:33:25.872986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.372 [2024-07-23 03:33:25.873012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.884993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.885029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.885058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.897052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.897088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.897117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.909200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.909235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.909265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.921045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.921080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.921110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.933006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.933047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.933078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.373 [2024-07-23 03:33:25.945001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.373 [2024-07-23 03:33:25.945031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.373 [2024-07-23 03:33:25.945076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:25.956918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.631 [2024-07-23 03:33:25.956963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.631 [2024-07-23 03:33:25.956989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:25.969029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.631 [2024-07-23 03:33:25.969065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.631 [2024-07-23 03:33:25.969095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:25.981080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.631 [2024-07-23 03:33:25.981116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.631 [2024-07-23 03:33:25.981146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:25.992991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.631 [2024-07-23 03:33:25.993026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.631 [2024-07-23 03:33:25.993056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:26.005025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.631 [2024-07-23 03:33:26.005061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.631 [2024-07-23 03:33:26.005091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.631 [2024-07-23 03:33:26.016949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.016998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.017028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.028926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.028962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.028993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.041007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.041043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.041071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.052957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.053006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.053037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.065053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.065090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.077010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.077046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.632 [2024-07-23 03:33:26.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.089086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.089121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.089151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.101087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.101123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.101153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.113004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.113039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.113069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.125735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.125767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.125793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.137643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.137691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.137724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.149799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.149845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.149871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.162281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.162317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.162347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.174376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.174442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.186268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.186303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.186333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.632 [2024-07-23 03:33:26.198102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.632 [2024-07-23 03:33:26.198137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.632 [2024-07-23 03:33:26.198168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.890 [2024-07-23 03:33:26.209955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.890 [2024-07-23 03:33:26.210005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.210035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.221984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.222020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.222050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.234058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.234093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.234123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.245969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.246005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.246034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.257947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.257992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.258017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.270035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.270071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.270101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.282046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.282082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.282112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.293918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.293967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.293997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.306019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.306055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.306085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.318016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.318051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.318080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.330010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 
00:33:59.891 [2024-07-23 03:33:26.330046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.330076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.341955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.342005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.342043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.353961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.354011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.354042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.366001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.366037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.366068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.377997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.378032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.378063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.389972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.390019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.390049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.401906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.401949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.401974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.413869] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.413901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.413945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.425740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.425771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.425798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.437802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.437833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.437860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.449781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.449817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.449844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.891 [2024-07-23 03:33:26.461679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:33:59.891 [2024-07-23 03:33:26.461709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.891 [2024-07-23 03:33:26.461736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.150 [2024-07-23 03:33:26.473628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:34:00.150 [2024-07-23 03:33:26.473675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.150 [2024-07-23 03:33:26.473702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.150 [2024-07-23 03:33:26.485510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50) 00:34:00.150 [2024-07-23 03:33:26.485545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.150 [2024-07-23 03:33:26.485575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0
00:34:00.150 [2024-07-23 03:33:26.497309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1786d50)
00:34:00.150 [2024-07-23 03:33:26.497345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.150 [2024-07-23 03:33:26.497375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:00.150
00:34:00.150 Latency(us)
00:34:00.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:00.150 nvme0n1 : 2.00 2569.06 321.13 0.00 0.00 6221.59 5752.60 13010.11
00:34:00.150 ===================================================================================================================
00:34:00.150 Total : 2569.06 321.13 0.00 0.00 6221.59 5752.60 13010.11
00:34:00.150 0
00:34:00.150 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:00.150 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:00.150 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:00.150 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:00.150 | .driver_specific
00:34:00.150 | .nvme_error
00:34:00.150 | .status_code
00:34:00.150 | .command_transient_transport_error'
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 589564
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 589564 ']'
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 589564
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 589564
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 589564'
00:34:00.409 killing process with pid 589564
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 589564
00:34:00.409 Received shutdown signal, test time was about 2.000000 seconds
00:34:00.409
00:34:00.409 Latency(us)
00:34:00.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.409 ===================================================================================================================
00:34:00.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:00.409 03:33:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 589564
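The check traced above is what turns the injected digest failures into a pass/fail decision for the randread case: bdevperf's iostat is fetched over its RPC socket and the per-status-code counter for transient transport errors (166 in this run) must be greater than zero. A minimal standalone sketch of that same check, assuming the rpc.py path, bperf socket, and bdev name shown in this run:

# Hypothetical standalone version of the get_transient_errcount step from the trace above;
# the dotted jq path is equivalent to the multi-line filter printed in the xtrace output.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) || echo "expected transient transport errors, got $errcount" >&2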
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=589968
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 589968 /var/tmp/bperf.sock
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 589968 ']'
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:00.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:00.668 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:00.668 [2024-07-23 03:33:27.061695] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:00.668 [2024-07-23 03:33:27.061767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589968 ]
00:34:00.668 EAL: No free 2048 kB hugepages reported on node 1
00:34:00.668 [2024-07-23 03:33:27.123854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:00.668 [2024-07-23 03:33:27.214035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:00.926 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:00.926 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:34:00.926 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:00.926 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:01.184 03:33:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:01.751 nvme0n1
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:01.751 03:33:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:01.751 Running I/O for 2 seconds...
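The trace above is the arming sequence for the randwrite error case: per-status-code NVMe error counters and unlimited bdev retries are switched on in the bdevperf instance, any previous crc32c error injection is cleared, a controller is attached over TCP with data digest (--ddgst) enabled, crc32c corruption is injected through the accel error-injection RPC with the same -o crc32c -t corrupt -i 256 arguments seen above, and the queued job is started through bdevperf.py. A condensed sketch of that sequence, reusing the exact commands from this run (paths and addresses are the ones this job uses; routing every call through one socket is an assumption of the sketch, since the trace does not show which socket the rpc_cmd wrapper targets):

# Hypothetical condensed replay of the arming sequence shown in the xtrace output.
# Assumption: a single rpc helper stands in for both bperf_rpc and rpc_cmd.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # per-status-code error counters, retry indefinitely
rpc accel_error_inject_error -o crc32c -t disable                     # start from a clean injection state
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest enabled on the TCP connection
rpc accel_error_inject_error -o crc32c -t corrupt -i 256              # corrupt crc32c results so data digests stop matching
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                              # kick off the queued randwrite job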
00:34:01.751 [2024-07-23 03:33:28.164709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.165072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.165117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.179318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.179626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.179673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.193878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.194195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.194232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.208282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.208581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.208624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.222787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.223083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.223117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.237096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.237396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.237430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.251448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.251745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.251774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.265777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.266075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.266108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.280066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.280360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.280394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.294393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.294695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.294723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.308688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.308982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.309016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:01.751 [2024-07-23 03:33:28.323063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:01.751 [2024-07-23 03:33:28.323356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.751 [2024-07-23 03:33:28.323390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.337344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.337642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.351601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.351978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.352011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.366024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.366313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.366347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.380368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.380662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.380695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.394630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.394944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.394972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.408960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.409250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.409284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.423170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.423449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.423482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.437555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.437855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.437884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.451849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.452146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.452179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.466094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.466359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.466392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.480349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.480638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.480687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.494639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.495088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.495121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.508966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.509269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.509302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.523315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.523609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.523665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.537838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.538136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.538170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.552107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.552401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.552434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.566387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.566688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.566716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.010 [2024-07-23 03:33:28.580713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.010 [2024-07-23 03:33:28.581017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.010 [2024-07-23 03:33:28.581050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.595143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.595436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.595470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.609400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.609709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.609738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.623733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.624047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.624080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.638033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.638327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.638360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.652342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.652638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.652682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.666559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.666906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.666939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.680815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.681121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.269 [2024-07-23 03:33:28.681150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.269 [2024-07-23 03:33:28.695056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.269 [2024-07-23 03:33:28.695347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.695380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.709321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.709626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.709659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.723678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.723980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.724026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.737932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.738225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.738258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.752227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.752518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.752551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.766586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.766971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.767005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.780883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.781192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.781225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.795267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.795563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.795596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.809549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.809909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.809956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.823867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.824176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.824210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.270 [2024-07-23 03:33:28.838112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.270 [2024-07-23 03:33:28.838408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.270 [2024-07-23 03:33:28.838441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.852385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.852693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.852722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.866756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.867024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.867057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.881074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.881363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.881397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.895451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.895752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.895780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.909846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.910148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.910182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.924164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.924426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.924459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.938493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.938830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.938860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.952747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.528 [2024-07-23 03:33:28.953033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.528 [2024-07-23 03:33:28.953067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.528 [2024-07-23 03:33:28.967014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:28.967309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:28.967342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:28.981269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:28.981560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:28.981599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:28.995474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:28.995787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:28.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.009763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.010070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.010103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.024087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.024378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.024411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.038319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.038591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.038633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.052645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.052947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.052981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.066999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.067306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.067339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.081358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.081665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.081694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.529 [2024-07-23 03:33:29.095643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.529 [2024-07-23 03:33:29.095945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.529 [2024-07-23 03:33:29.095979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.110056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.110352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.110385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.124455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.124762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.124792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.138817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.139119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.139153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.153108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.153379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 
03:33:29.153412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.167510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.167806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.167838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.181855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.182161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.196084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.196378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.196411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.210375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.210680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.210709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.224594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.224987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.225020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.238955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.239246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 [2024-07-23 03:33:29.239278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.253212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.787 [2024-07-23 03:33:29.253504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.787 
[2024-07-23 03:33:29.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.787 [2024-07-23 03:33:29.267463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.267777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.267806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.281715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.282008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.282041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.295924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.296218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.296251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.310190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.310484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.310517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.324430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.324736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.324765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.338712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.339013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.788 [2024-07-23 03:33:29.339046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.788 [2024-07-23 03:33:29.352928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:02.788 [2024-07-23 03:33:29.353209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:02.788 [2024-07-23 03:33:29.353248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.367221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.367513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.367547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.381465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.381761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.381789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.395697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.395998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.396031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.409953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.410255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.410288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.424253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.424546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.424580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.438528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.438845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.438873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.452787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.453074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:03.046 [2024-07-23 03:33:29.453102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.466964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.467259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.467293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.481250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.481552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.481585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.495519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.495807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.495835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.509749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.510054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.510087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.523985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.524278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.524311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.538249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.538542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.538575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.046 [2024-07-23 03:33:29.552625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.046 [2024-07-23 03:33:29.552947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4260 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:03.046 [2024-07-23 03:33:29.552994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.047 [2024-07-23 03:33:29.566831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.047 [2024-07-23 03:33:29.567128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.047 [2024-07-23 03:33:29.567161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.047 [2024-07-23 03:33:29.581058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.047 [2024-07-23 03:33:29.581351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.047 [2024-07-23 03:33:29.581384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.047 [2024-07-23 03:33:29.595408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.047 [2024-07-23 03:33:29.595702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.047 [2024-07-23 03:33:29.595730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.047 [2024-07-23 03:33:29.609689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.047 [2024-07-23 03:33:29.610005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.047 [2024-07-23 03:33:29.610038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.623940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.305 [2024-07-23 03:33:29.624235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.305 [2024-07-23 03:33:29.624268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.638197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.305 [2024-07-23 03:33:29.638488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.305 [2024-07-23 03:33:29.638521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.652369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.305 [2024-07-23 03:33:29.652646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11774 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.305 [2024-07-23 03:33:29.652693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.666694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.305 [2024-07-23 03:33:29.667019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.305 [2024-07-23 03:33:29.667053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.680856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.305 [2024-07-23 03:33:29.681159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.305 [2024-07-23 03:33:29.681192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.305 [2024-07-23 03:33:29.695236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.695528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.695562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.709436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.709738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.709768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.723758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.724049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.724082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.737936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.738222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.738255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.752190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.752451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:6493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.752484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.766443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.766734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.766766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.780749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.781047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.781081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.795023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.795314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.795347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.809278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.809568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.809600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.823504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.823799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.823827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.837664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.837972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.838005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.851983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.852276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.852314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.866261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.866552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.866584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.306 [2024-07-23 03:33:29.880478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.306 [2024-07-23 03:33:29.880786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.306 [2024-07-23 03:33:29.880816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.564 [2024-07-23 03:33:29.894745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.895064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.895097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.909010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.909304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.909337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.923248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.923544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.923577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.937437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.937736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.937764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.951757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.952045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:21394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.952078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.966002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.966297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.966330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.980173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.980447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.980480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:29.994523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:29.994835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:29.994864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.009359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.009685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.009719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.023947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.024265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.024300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.038415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.038726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.038760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.052779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.053097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.053128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.066086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.066386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.066415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.080460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.080770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.080801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.095004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.095298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.095332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.109550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.109829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.109860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.124022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.124318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.124351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.565 [2024-07-23 03:33:30.138413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.565 [2024-07-23 03:33:30.138717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.565 [2024-07-23 03:33:30.138748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.823 [2024-07-23 03:33:30.152774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55bc0) with pdu=0x2000190fe2e8 00:34:03.823 [2024-07-23 03:33:30.153044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.823 [2024-07-23 03:33:30.153078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.823 00:34:03.823 Latency(us) 00:34:03.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.823 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.823 nvme0n1 : 2.01 17787.10 69.48 0.00 0.00 7177.49 6359.42 16408.27 00:34:03.823 =================================================================================================================== 00:34:03.823 Total : 17787.10 69.48 0.00 0.00 7177.49 6359.42 16408.27 00:34:03.823 0 00:34:03.823 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:03.823 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:03.823 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:03.823 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:03.823 | .driver_specific 00:34:03.823 | .nvme_error 00:34:03.823 | .status_code 00:34:03.823 | .command_transient_transport_error' 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 589968 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 589968 ']' 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 589968 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 589968 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 589968' 00:34:04.082 killing process with pid 589968 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 589968 00:34:04.082 Received shutdown signal, test time was about 2.000000 seconds 00:34:04.082 00:34:04.082 Latency(us) 00:34:04.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.082 =================================================================================================================== 00:34:04.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 589968 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 
00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=590378 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 590378 /var/tmp/bperf.sock 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 590378 ']' 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:04.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:04.082 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:04.340 [2024-07-23 03:33:30.690684] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:04.340 [2024-07-23 03:33:30.690760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590378 ] 00:34:04.340 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:04.340 Zero copy mechanism will not be used. 
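The trace above is the bperf pattern this digest suite uses: bdevperf is launched idle (-z) on its own RPC socket, configured over that socket, and its per-bdev NVMe error counters are later read back the same way (the "(( 140 > 0 ))" check earlier in the trace is exactly that counter for the previous run). A minimal sketch of the two ends, reassembled from commands visible in the trace; the socket path, bdev name nvme0n1 and job parameters are taken verbatim from the log, and paths are written relative to the SPDK checkout:

  # Start bdevperf idle: -z keeps the job queued on /var/tmp/bperf.sock until
  # perform_tests is issued later in the trace.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Read how many completions carried COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1
  # (the jq filter below is host/digest.sh@27-28 rejoined onto one line).
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'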
00:34:04.340 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.340 [2024-07-23 03:33:30.748483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.340 [2024-07-23 03:33:30.834040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.598 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:04.598 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:34:04.598 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:04.598 03:33:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:04.598 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:04.598 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.598 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:04.856 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.856 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:04.856 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:05.117 nvme0n1 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:05.117 03:33:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:05.117 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:05.117 Zero copy mechanism will not be used. 00:34:05.117 Running I/O for 2 seconds... 
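The setup traced just above is what makes every write in the following run fail its digest check: error statistics are enabled on the host bdev (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), the controller is attached with data digest enabled (--ddgst), and the crc32c accel operation is switched from 'disable' to 'corrupt' (with -i 32, as traced) before perform_tests is issued, so the data digest check on each write fails and the command completes with COMMAND TRANSIENT TRANSPORT ERROR, which is the pattern in the log that follows. A compressed sketch of that sequence using only RPCs visible in the trace; that rpc_cmd addresses the target's default socket while bperf_rpc addresses /var/tmp/bperf.sock is inferred from the script helpers, not shown explicitly in the log:

  # bdevperf (host) side, over /var/tmp/bperf.sock:
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject crc32c corruption (default RPC socket assumed here, per the rpc_cmd helper).
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Release the idle bdevperf job started with -z; it then runs I/O for 2 seconds.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests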
00:34:05.117 [2024-07-23 03:33:31.670225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.117 [2024-07-23 03:33:31.670623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.117 [2024-07-23 03:33:31.670693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.117 [2024-07-23 03:33:31.686239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.117 [2024-07-23 03:33:31.686642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.117 [2024-07-23 03:33:31.686694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.704528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.704947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.704984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.721743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.722230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.722266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.737808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.738357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.738392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.754137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.754480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.770989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.771336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.788207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.788564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.788594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.805812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.806228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.806258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.822841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.823217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.823262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.839559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.839968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.840017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.856441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.856983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.857013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.873377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.873817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.873859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.890016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.890378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.890406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.907658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.908116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.925835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.926249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.926292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.943273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.943723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.943768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.406 [2024-07-23 03:33:31.959980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.406 [2024-07-23 03:33:31.960371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.406 [2024-07-23 03:33:31.960418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:31.976504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:31.976971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:31.977002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:31.994315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:31.994697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:31.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.011445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.011800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.011831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.028635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.029199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.029228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.044969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.045401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.045430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.064497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.064877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.064923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.082028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.082475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.082503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.099993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.100312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.100341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.117170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.117698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.117728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.134029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.134387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 
[2024-07-23 03:33:32.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.151412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.151857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.151901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.168297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.168651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.168683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.184869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.185257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.185300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.200032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.200445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.200477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.216508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.216965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.216993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.665 [2024-07-23 03:33:32.233852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.665 [2024-07-23 03:33:32.234211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.665 [2024-07-23 03:33:32.234239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.923 [2024-07-23 03:33:32.252813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.253172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.269788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.270190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.270218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.284541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.284975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.301136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.301496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.301524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.316170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.316521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.316549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.331854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.332229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.332257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.347265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.347645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.347675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.363414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.364000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.364046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.380497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.380866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.380896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.397117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.397522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.397566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.415185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.415748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.415778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.433226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.433578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.433631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.450976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.451380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.451424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.466950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.467315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.467343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.924 [2024-07-23 03:33:32.483219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:05.924 [2024-07-23 03:33:32.483702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.924 [2024-07-23 03:33:32.483750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.182 [2024-07-23 03:33:32.500537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.182 [2024-07-23 03:33:32.500826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.182 [2024-07-23 03:33:32.500856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.182 [2024-07-23 03:33:32.517611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.182 [2024-07-23 03:33:32.518003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.182 [2024-07-23 03:33:32.518047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.182 [2024-07-23 03:33:32.533863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.182 [2024-07-23 03:33:32.534226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.182 [2024-07-23 03:33:32.534255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.182 [2024-07-23 03:33:32.550009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.182 [2024-07-23 03:33:32.550440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.182 [2024-07-23 03:33:32.550467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.182 [2024-07-23 03:33:32.566869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.182 [2024-07-23 03:33:32.567242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.182 [2024-07-23 03:33:32.567269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.584105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.584502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.584550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.601068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 
[2024-07-23 03:33:32.601331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.601360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.617474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.617861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.634381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.634772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.634801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.652257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.652709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.670019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.670465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.670492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.687239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.687649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.687677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.704990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.705368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.705414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.722795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) 
with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.723139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.723167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.739568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.739924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.739953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.183 [2024-07-23 03:33:32.756644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.183 [2024-07-23 03:33:32.757082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.183 [2024-07-23 03:33:32.757127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.774670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.775073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.775124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.791383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.791785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.791826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.808565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.809013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.809056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.826184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.826536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.826565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.841286] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.841791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.841833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.856758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.857107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.857136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.874388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.874760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.874804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.892439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.892875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.909925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.441 [2024-07-23 03:33:32.910366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.441 [2024-07-23 03:33:32.910395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.441 [2024-07-23 03:33:32.928094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:32.928456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:32.928515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.442 [2024-07-23 03:33:32.945747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:32.946126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:32.946170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.442 [2024-07-23 03:33:32.963720] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:32.964093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:32.964122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.442 [2024-07-23 03:33:32.981026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:32.981494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:32.981540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.442 [2024-07-23 03:33:32.998068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:32.998457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:32.998484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.442 [2024-07-23 03:33:33.013892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.442 [2024-07-23 03:33:33.014259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.442 [2024-07-23 03:33:33.014288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.030327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.030677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.030706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.048544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.048949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.048979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.065462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.065954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.065981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:06.700 [2024-07-23 03:33:33.082085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.082345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.082401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.099182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.099584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.099645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.117452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.117810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.117838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.134458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.134840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.134886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.151051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.151482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.151510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.168819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.169218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.169252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.700 [2024-07-23 03:33:33.186143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.700 [2024-07-23 03:33:33.186501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.700 [2024-07-23 03:33:33.186529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.701 [2024-07-23 03:33:33.203758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.701 [2024-07-23 03:33:33.204190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.701 [2024-07-23 03:33:33.204217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.701 [2024-07-23 03:33:33.219888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.701 [2024-07-23 03:33:33.220258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.701 [2024-07-23 03:33:33.220287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.701 [2024-07-23 03:33:33.236285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.701 [2024-07-23 03:33:33.236659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.701 [2024-07-23 03:33:33.236702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.701 [2024-07-23 03:33:33.254518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.701 [2024-07-23 03:33:33.254958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.701 [2024-07-23 03:33:33.255002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.701 [2024-07-23 03:33:33.270978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.701 [2024-07-23 03:33:33.271335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.701 [2024-07-23 03:33:33.271364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.288363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.288751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.958 [2024-07-23 03:33:33.288795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.306704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.307204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.958 [2024-07-23 03:33:33.307232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.324538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.325006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.958 [2024-07-23 03:33:33.325052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.343035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.343467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.958 [2024-07-23 03:33:33.343496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.360106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.360534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.958 [2024-07-23 03:33:33.360562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.958 [2024-07-23 03:33:33.377035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.958 [2024-07-23 03:33:33.377411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.377445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.394574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.394988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.395018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.411583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.411991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.412035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.430500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.430915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.430944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.448163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.448590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.448632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.464536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.464902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.464931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.481418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.481887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.481917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.498315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.498735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.498780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.515838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.516320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.516349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.959 [2024-07-23 03:33:33.533336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:06.959 [2024-07-23 03:33:33.533829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.959 [2024-07-23 03:33:33.533883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.551665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.552073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 
[2024-07-23 03:33:33.552131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.568482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.568704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.568734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.585648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.586209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.586237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.604086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.604692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.604734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.622893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.623291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.623319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.640254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.640632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.640659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.216 [2024-07-23 03:33:33.657583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd55e90) with pdu=0x2000190fef90 00:34:07.216 [2024-07-23 03:33:33.658023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.216 [2024-07-23 03:33:33.658049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.216 00:34:07.216 Latency(us) 00:34:07.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.216 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:07.216 nvme0n1 : 2.01 1810.84 226.35 0.00 0.00 8812.14 6456.51 19418.07 00:34:07.216 
=================================================================================================================== 00:34:07.216 Total : 1810.84 226.35 0.00 0.00 8812.14 6456.51 19418.07 00:34:07.216 0 00:34:07.216 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:07.216 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:07.216 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:07.216 | .driver_specific 00:34:07.216 | .nvme_error 00:34:07.216 | .status_code 00:34:07.216 | .command_transient_transport_error' 00:34:07.216 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 590378 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 590378 ']' 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 590378 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 590378 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 590378' 00:34:07.474 killing process with pid 590378 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 590378 00:34:07.474 Received shutdown signal, test time was about 2.000000 seconds 00:34:07.474 00:34:07.474 Latency(us) 00:34:07.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.474 =================================================================================================================== 00:34:07.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:07.474 03:33:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 590378 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 589018 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 589018 ']' 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 589018 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 589018 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:07.732 
03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 589018' 00:34:07.732 killing process with pid 589018 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 589018 00:34:07.732 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 589018 00:34:07.991 00:34:07.991 real 0m14.971s 00:34:07.991 user 0m29.066s 00:34:07.991 sys 0m4.168s 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.991 ************************************ 00:34:07.991 END TEST nvmf_digest_error 00:34:07.991 ************************************ 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:07.991 rmmod nvme_tcp 00:34:07.991 rmmod nvme_fabrics 00:34:07.991 rmmod nvme_keyring 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 589018 ']' 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 589018 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 589018 ']' 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 589018 00:34:07.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (589018) - No such process 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 589018 is not found' 00:34:07.991 Process with pid 589018 is not found 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:07.991 03:33:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.533 03:33:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:34:10.533 00:34:10.533 real 0m34.627s 00:34:10.533 user 1m0.466s 00:34:10.533 sys 0m9.677s 00:34:10.533 03:33:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:10.533 03:33:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.533 ************************************ 00:34:10.533 END TEST nvmf_digest 00:34:10.533 ************************************ 00:34:10.533 03:33:36 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:10.533 03:33:36 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:10.533 03:33:36 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:10.533 03:33:36 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:10.533 03:33:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:10.533 03:33:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:10.533 03:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:10.533 ************************************ 00:34:10.533 START TEST nvmf_bdevperf 00:34:10.533 ************************************ 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:10.533 * Looking for test storage... 00:34:10.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.533 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.534 03:33:36 
nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:10.534 03:33:36 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:10.534 03:33:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.433 03:33:38 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:12.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:12.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:12.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:12.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:12.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:34:12.433 00:34:12.433 --- 10.0.0.2 ping statistics --- 00:34:12.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.433 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:12.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:34:12.433 00:34:12.433 --- 10.0.0.1 ping statistics --- 00:34:12.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.433 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:12.433 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=592727 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 592727 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 592727 ']' 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:12.434 03:33:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.434 [2024-07-23 03:33:38.850574] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:12.434 [2024-07-23 03:33:38.850671] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.434 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.434 [2024-07-23 03:33:38.917140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:12.434 [2024-07-23 03:33:39.001948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.434 [2024-07-23 03:33:39.002003] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.434 [2024-07-23 03:33:39.002032] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.434 [2024-07-23 03:33:39.002043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.434 [2024-07-23 03:33:39.002053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.434 [2024-07-23 03:33:39.002112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:12.434 [2024-07-23 03:33:39.002428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.434 [2024-07-23 03:33:39.002433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.691 [2024-07-23 03:33:39.131842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.691 Malloc0 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.691 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.691 03:33:39 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.692 [2024-07-23 03:33:39.193102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:12.692 { 00:34:12.692 "params": { 00:34:12.692 "name": "Nvme$subsystem", 00:34:12.692 "trtype": "$TEST_TRANSPORT", 00:34:12.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.692 "adrfam": "ipv4", 00:34:12.692 "trsvcid": "$NVMF_PORT", 00:34:12.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.692 "hdgst": ${hdgst:-false}, 00:34:12.692 "ddgst": ${ddgst:-false} 00:34:12.692 }, 00:34:12.692 "method": "bdev_nvme_attach_controller" 00:34:12.692 } 00:34:12.692 EOF 00:34:12.692 )") 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:12.692 03:33:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:12.692 "params": { 00:34:12.692 "name": "Nvme1", 00:34:12.692 "trtype": "tcp", 00:34:12.692 "traddr": "10.0.0.2", 00:34:12.692 "adrfam": "ipv4", 00:34:12.692 "trsvcid": "4420", 00:34:12.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:12.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:12.692 "hdgst": false, 00:34:12.692 "ddgst": false 00:34:12.692 }, 00:34:12.692 "method": "bdev_nvme_attach_controller" 00:34:12.692 }' 00:34:12.692 [2024-07-23 03:33:39.239333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
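The target-side configuration driven through rpc_cmd above maps directly onto SPDK's scripts/rpc.py (rpc_cmd forwards its arguments to rpc.py over /var/tmp/spdk.sock). A minimal standalone sketch of the same setup, with flag spellings copied from the trace, assuming the nvmf_tgt started above is listening on the default RPC socket:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, 8 KiB IO unit
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
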
00:34:12.692 [2024-07-23 03:33:39.239402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592753 ] 00:34:12.949 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.949 [2024-07-23 03:33:39.302219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.949 [2024-07-23 03:33:39.389495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.208 Running I/O for 1 seconds... 00:34:14.141 00:34:14.141 Latency(us) 00:34:14.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:14.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:14.141 Verification LBA range: start 0x0 length 0x4000 00:34:14.141 Nvme1n1 : 1.01 8805.08 34.39 0.00 0.00 14476.42 2961.26 18155.90 00:34:14.141 =================================================================================================================== 00:34:14.141 Total : 8805.08 34.39 0.00 0.00 14476.42 2961.26 18155.90 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=593011 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:14.398 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:14.399 { 00:34:14.399 "params": { 00:34:14.399 "name": "Nvme$subsystem", 00:34:14.399 "trtype": "$TEST_TRANSPORT", 00:34:14.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.399 "adrfam": "ipv4", 00:34:14.399 "trsvcid": "$NVMF_PORT", 00:34:14.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.399 "hdgst": ${hdgst:-false}, 00:34:14.399 "ddgst": ${ddgst:-false} 00:34:14.399 }, 00:34:14.399 "method": "bdev_nvme_attach_controller" 00:34:14.399 } 00:34:14.399 EOF 00:34:14.399 )") 00:34:14.399 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:14.399 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:14.399 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:14.399 03:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:14.399 "params": { 00:34:14.399 "name": "Nvme1", 00:34:14.399 "trtype": "tcp", 00:34:14.399 "traddr": "10.0.0.2", 00:34:14.399 "adrfam": "ipv4", 00:34:14.399 "trsvcid": "4420", 00:34:14.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:14.399 "hdgst": false, 00:34:14.399 "ddgst": false 00:34:14.399 }, 00:34:14.399 "method": "bdev_nvme_attach_controller" 00:34:14.399 }' 00:34:14.399 [2024-07-23 03:33:40.893773] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:34:14.399 [2024-07-23 03:33:40.893859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593011 ] 00:34:14.399 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.399 [2024-07-23 03:33:40.954144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.656 [2024-07-23 03:33:41.039041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.914 Running I/O for 15 seconds... 00:34:17.443 03:33:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 592727 00:34:17.443 03:33:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:17.443 [2024-07-23 03:33:43.861567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.443 [2024-07-23 03:33:43.861637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.443 [2024-07-23 03:33:43.861703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.861949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.861985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.862017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.862091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.862123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.443 [2024-07-23 03:33:43.862155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.443 [2024-07-23 03:33:43.862170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 
03:33:43.862287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.444 [2024-07-23 03:33:43.862498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.862976] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.862991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.444 [2024-07-23 03:33:43.863054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.444 [2024-07-23 03:33:43.863473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.444 [2024-07-23 03:33:43.863488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.863959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.863987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 
03:33:43.864026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.445 [2024-07-23 03:33:43.864819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.445 [2024-07-23 03:33:43.864832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.864848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.864861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.864876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.864889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.864926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.864941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.864958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.864978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.864994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:17.446 [2024-07-23 03:33:43.865197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 
[2024-07-23 03:33:43.865384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.446 [2024-07-23 03:33:43.865959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.865975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103b9a0 is same with the state(5) to be set 00:34:17.446 [2024-07-23 03:33:43.865994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:17.446 [2024-07-23 03:33:43.866006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:17.446 [2024-07-23 03:33:43.866018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47040 len:8 PRP1 0x0 PRP2 0x0 00:34:17.446 [2024-07-23 03:33:43.866032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.866099] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x103b9a0 was disconnected and freed. reset controller. 00:34:17.446 [2024-07-23 03:33:43.866174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.446 [2024-07-23 03:33:43.866206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.866238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.446 [2024-07-23 03:33:43.866251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.446 [2024-07-23 03:33:43.866270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.446 [2024-07-23 03:33:43.866282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.447 [2024-07-23 03:33:43.866314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.447 [2024-07-23 03:33:43.866327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.447 [2024-07-23 03:33:43.866339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.870278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.870321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.871076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.871108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.871126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.871367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.871611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.871643] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.871677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.875302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
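The reconnect attempts that follow keep failing with "connect() failed, errno = 111" because the target process (pid 592727) was killed with SIGKILL a few lines earlier, so nothing is listening on 10.0.0.2:4420 at this point; on Linux errno 111 is ECONNREFUSED. A quick sketch to confirm the mapping:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused
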
00:34:17.447 [2024-07-23 03:33:43.884552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.885009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.885037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.885053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.885318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.885572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.885596] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.885622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.889268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.447 [2024-07-23 03:33:43.898430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.898882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.898935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.898951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.899214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.899457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.899480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.899496] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.903107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.447 [2024-07-23 03:33:43.912471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.912925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.912957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.912975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.913214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.913458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.913482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.913497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.917092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.447 [2024-07-23 03:33:43.926433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.926927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.926954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.926970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.927225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.927469] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.927492] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.927507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.931107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.447 [2024-07-23 03:33:43.940465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.940939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.940970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.940988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.941227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.941471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.941494] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.941509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.945116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.447 [2024-07-23 03:33:43.954455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.955010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.955037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.955073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.955326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.955570] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.955593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.955621] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.959214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.447 [2024-07-23 03:33:43.968309] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.968737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.968768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.968786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.969025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.969269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.969292] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.969307] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.972903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.447 [2024-07-23 03:33:43.982230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.982680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.982711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.982729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.982968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.983211] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.447 [2024-07-23 03:33:43.983235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.447 [2024-07-23 03:33:43.983250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.447 [2024-07-23 03:33:43.986849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.447 [2024-07-23 03:33:43.996180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.447 [2024-07-23 03:33:43.996603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.447 [2024-07-23 03:33:43.996640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.447 [2024-07-23 03:33:43.996659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.447 [2024-07-23 03:33:43.996898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.447 [2024-07-23 03:33:43.997142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.448 [2024-07-23 03:33:43.997170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.448 [2024-07-23 03:33:43.997187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.448 [2024-07-23 03:33:44.000811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.448 [2024-07-23 03:33:44.010147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.448 [2024-07-23 03:33:44.010618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.448 [2024-07-23 03:33:44.010649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.448 [2024-07-23 03:33:44.010667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.448 [2024-07-23 03:33:44.010906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.448 [2024-07-23 03:33:44.011149] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.448 [2024-07-23 03:33:44.011172] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.448 [2024-07-23 03:33:44.011187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.448 [2024-07-23 03:33:44.014794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.706 [2024-07-23 03:33:44.024131] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.024582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.024621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.024641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.024881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.706 [2024-07-23 03:33:44.025124] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.706 [2024-07-23 03:33:44.025147] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.706 [2024-07-23 03:33:44.025162] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.706 [2024-07-23 03:33:44.028758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.706 [2024-07-23 03:33:44.038089] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.038514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.038545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.038563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.038815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.706 [2024-07-23 03:33:44.039058] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.706 [2024-07-23 03:33:44.039082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.706 [2024-07-23 03:33:44.039098] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.706 [2024-07-23 03:33:44.042694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.706 [2024-07-23 03:33:44.052032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.052465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.052495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.052513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.052763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.706 [2024-07-23 03:33:44.053007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.706 [2024-07-23 03:33:44.053030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.706 [2024-07-23 03:33:44.053046] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.706 [2024-07-23 03:33:44.056642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.706 [2024-07-23 03:33:44.065971] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.066420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.066451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.066469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.066736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.706 [2024-07-23 03:33:44.066982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.706 [2024-07-23 03:33:44.067005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.706 [2024-07-23 03:33:44.067020] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.706 [2024-07-23 03:33:44.070610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.706 [2024-07-23 03:33:44.079945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.080374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.080405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.080423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.080673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.706 [2024-07-23 03:33:44.080917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.706 [2024-07-23 03:33:44.080941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.706 [2024-07-23 03:33:44.080956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.706 [2024-07-23 03:33:44.084544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.706 [2024-07-23 03:33:44.093880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.706 [2024-07-23 03:33:44.094332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.706 [2024-07-23 03:33:44.094363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.706 [2024-07-23 03:33:44.094380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.706 [2024-07-23 03:33:44.094637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.094883] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.094906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.094922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.098511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.707 [2024-07-23 03:33:44.107849] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.108305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.108335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.108353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.108591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.108845] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.108868] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.108884] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.112471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.707 [2024-07-23 03:33:44.121811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.122264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.122294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.122312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.122551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.122804] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.122828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.122843] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.126432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.707 [2024-07-23 03:33:44.135766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.136380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.136434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.136452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.136701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.136945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.136967] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.136988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.140576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.707 [2024-07-23 03:33:44.149701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.150159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.150190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.150207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.150447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.150702] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.150725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.150740] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.154329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.707 [2024-07-23 03:33:44.163671] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.164122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.164154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.164171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.164411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.164666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.164690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.164705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.168295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.707 [2024-07-23 03:33:44.177655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.178097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.178128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.178145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.178384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.178640] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.178664] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.178679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.182272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.707 [2024-07-23 03:33:44.191618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.192050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.192081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.192099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.192337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.192581] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.192604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.192631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.196223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.707 [2024-07-23 03:33:44.205597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.206229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.206284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.206302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.206540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.206796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.206820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.206835] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.210426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.707 [2024-07-23 03:33:44.219552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.219962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.219993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.220011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.220250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.220494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.220517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.707 [2024-07-23 03:33:44.220532] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.707 [2024-07-23 03:33:44.224135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.707 [2024-07-23 03:33:44.233488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.707 [2024-07-23 03:33:44.233925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.707 [2024-07-23 03:33:44.233955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.707 [2024-07-23 03:33:44.233973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.707 [2024-07-23 03:33:44.234212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.707 [2024-07-23 03:33:44.234461] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.707 [2024-07-23 03:33:44.234484] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.708 [2024-07-23 03:33:44.234500] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.708 [2024-07-23 03:33:44.238103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.708 [2024-07-23 03:33:44.247436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.708 [2024-07-23 03:33:44.247844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.708 [2024-07-23 03:33:44.247875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.708 [2024-07-23 03:33:44.247893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.708 [2024-07-23 03:33:44.248132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.708 [2024-07-23 03:33:44.248375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.708 [2024-07-23 03:33:44.248398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.708 [2024-07-23 03:33:44.248413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.708 [2024-07-23 03:33:44.252016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.708 [2024-07-23 03:33:44.261357] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.708 [2024-07-23 03:33:44.261823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.708 [2024-07-23 03:33:44.261854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.708 [2024-07-23 03:33:44.261872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.708 [2024-07-23 03:33:44.262111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.708 [2024-07-23 03:33:44.262354] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.708 [2024-07-23 03:33:44.262377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.708 [2024-07-23 03:33:44.262392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.708 [2024-07-23 03:33:44.266084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.708 [2024-07-23 03:33:44.275239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.708 [2024-07-23 03:33:44.275700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.708 [2024-07-23 03:33:44.275731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.708 [2024-07-23 03:33:44.275749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.708 [2024-07-23 03:33:44.275988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.708 [2024-07-23 03:33:44.276232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.708 [2024-07-23 03:33:44.276255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.708 [2024-07-23 03:33:44.276270] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.708 [2024-07-23 03:33:44.279872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.967 [2024-07-23 03:33:44.289214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.967 [2024-07-23 03:33:44.289647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.967 [2024-07-23 03:33:44.289680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.967 [2024-07-23 03:33:44.289698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.967 [2024-07-23 03:33:44.289937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.967 [2024-07-23 03:33:44.290181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.967 [2024-07-23 03:33:44.290205] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.967 [2024-07-23 03:33:44.290220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.967 [2024-07-23 03:33:44.293823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.967 [2024-07-23 03:33:44.303167] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.967 [2024-07-23 03:33:44.303626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.967 [2024-07-23 03:33:44.303657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.967 [2024-07-23 03:33:44.303675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.967 [2024-07-23 03:33:44.303914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.967 [2024-07-23 03:33:44.304158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.967 [2024-07-23 03:33:44.304181] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.967 [2024-07-23 03:33:44.304196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.967 [2024-07-23 03:33:44.307800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.967 [2024-07-23 03:33:44.317138] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.967 [2024-07-23 03:33:44.317588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.967 [2024-07-23 03:33:44.317626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.967 [2024-07-23 03:33:44.317646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.967 [2024-07-23 03:33:44.317886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.967 [2024-07-23 03:33:44.318129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.967 [2024-07-23 03:33:44.318152] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.967 [2024-07-23 03:33:44.318168] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.967 [2024-07-23 03:33:44.321772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.967 [2024-07-23 03:33:44.331137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.967 [2024-07-23 03:33:44.331563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.967 [2024-07-23 03:33:44.331594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.967 [2024-07-23 03:33:44.331629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.967 [2024-07-23 03:33:44.331874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.332118] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.332141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.332156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.335756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.968 [2024-07-23 03:33:44.345094] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.345500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.345532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.345550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.345802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.346046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.346069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.346085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.349682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.968 [2024-07-23 03:33:44.359021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.359467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.359498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.359516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.359766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.360011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.360034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.360049] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.363651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.968 [2024-07-23 03:33:44.373002] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.373428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.373459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.373478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.373728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.373972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.374001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.374016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.377619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.968 [2024-07-23 03:33:44.386957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.387386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.387417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.387434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.387686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.387930] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.387953] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.387968] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.391575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.968 [2024-07-23 03:33:44.400924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.401382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.401412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.401430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.401682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.401926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.401949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.401964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.405558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.968 [2024-07-23 03:33:44.414909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.415355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.415386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.415403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.415654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.415898] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.415921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.415936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.419528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.968 [2024-07-23 03:33:44.428878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.429312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.429343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.429360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.429600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.429855] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.429879] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.429894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.433485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.968 [2024-07-23 03:33:44.442848] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.443301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.443331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.443349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.443588] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.443842] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.443866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.443881] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.447469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.968 [2024-07-23 03:33:44.456815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.457236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.457267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.457285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.457524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.457780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.968 [2024-07-23 03:33:44.457803] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.968 [2024-07-23 03:33:44.457819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.968 [2024-07-23 03:33:44.461411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.968 [2024-07-23 03:33:44.470756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.968 [2024-07-23 03:33:44.471215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.968 [2024-07-23 03:33:44.471246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.968 [2024-07-23 03:33:44.471264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.968 [2024-07-23 03:33:44.471508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.968 [2024-07-23 03:33:44.471765] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.471789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.471804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.969 [2024-07-23 03:33:44.475395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.969 [2024-07-23 03:33:44.484739] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.969 [2024-07-23 03:33:44.485188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.969 [2024-07-23 03:33:44.485219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.969 [2024-07-23 03:33:44.485237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.969 [2024-07-23 03:33:44.485475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.969 [2024-07-23 03:33:44.485732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.485756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.485771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.969 [2024-07-23 03:33:44.489361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.969 [2024-07-23 03:33:44.498702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.969 [2024-07-23 03:33:44.499132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.969 [2024-07-23 03:33:44.499163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.969 [2024-07-23 03:33:44.499181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.969 [2024-07-23 03:33:44.499420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.969 [2024-07-23 03:33:44.499677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.499701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.499716] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.969 [2024-07-23 03:33:44.503307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.969 [2024-07-23 03:33:44.512657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.969 [2024-07-23 03:33:44.513081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.969 [2024-07-23 03:33:44.513112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.969 [2024-07-23 03:33:44.513130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.969 [2024-07-23 03:33:44.513368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.969 [2024-07-23 03:33:44.513612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.513647] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.513668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.969 [2024-07-23 03:33:44.517258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.969 [2024-07-23 03:33:44.526593] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.969 [2024-07-23 03:33:44.527035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.969 [2024-07-23 03:33:44.527066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.969 [2024-07-23 03:33:44.527084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.969 [2024-07-23 03:33:44.527322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.969 [2024-07-23 03:33:44.527566] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.527589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.527604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.969 [2024-07-23 03:33:44.531206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.969 [2024-07-23 03:33:44.540545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.969 [2024-07-23 03:33:44.540984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.969 [2024-07-23 03:33:44.541015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:17.969 [2024-07-23 03:33:44.541032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:17.969 [2024-07-23 03:33:44.541271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:17.969 [2024-07-23 03:33:44.541514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.969 [2024-07-23 03:33:44.541538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.969 [2024-07-23 03:33:44.541553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.228 [2024-07-23 03:33:44.545158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.228 [2024-07-23 03:33:44.554497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.228 [2024-07-23 03:33:44.554967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.228 [2024-07-23 03:33:44.554998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.555016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.555254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.555497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.555520] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.555535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.559138] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.229 [2024-07-23 03:33:44.568476] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.568890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.568920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.568938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.569177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.569420] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.569443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.569458] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.573059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.229 [2024-07-23 03:33:44.582395] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.582938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.582969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.582987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.583227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.583471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.583494] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.583509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.587111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.229 [2024-07-23 03:33:44.596446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.596902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.596933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.596951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.597189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.597433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.597456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.597471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.601072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.229 [2024-07-23 03:33:44.610416] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.610871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.610902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.610920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.611168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.611412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.611435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.611450] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.615054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.229 [2024-07-23 03:33:44.624399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.624874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.624905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.624923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.625162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.625406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.625429] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.625444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.629039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.229 [2024-07-23 03:33:44.638371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.638838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.638869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.638887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.639126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.639369] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.639392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.639407] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.643010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.229 [2024-07-23 03:33:44.652351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.652821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.652852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.652869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.653108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.653352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.653375] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.653390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.656996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.229 [2024-07-23 03:33:44.666332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.666957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.667011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.667028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.667267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.667510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.667533] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.667548] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.671152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.229 [2024-07-23 03:33:44.680281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.680733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.680765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.229 [2024-07-23 03:33:44.680783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.229 [2024-07-23 03:33:44.681022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.229 [2024-07-23 03:33:44.681265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.229 [2024-07-23 03:33:44.681288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.229 [2024-07-23 03:33:44.681303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.229 [2024-07-23 03:33:44.684905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.229 [2024-07-23 03:33:44.694239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.229 [2024-07-23 03:33:44.694663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.229 [2024-07-23 03:33:44.694694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.694712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.694951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.695195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.695217] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.695233] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.698837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.230 [2024-07-23 03:33:44.708183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.708606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.708648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.708667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.708907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.709150] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.709173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.709189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.712814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.230 [2024-07-23 03:33:44.722152] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.722585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.722624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.722644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.722883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.723126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.723150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.723165] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.726769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.230 [2024-07-23 03:33:44.736109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.736533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.736563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.736581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.736831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.737074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.737097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.737113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.740706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.230 [2024-07-23 03:33:44.750041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.750485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.750517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.750535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.750786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.751037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.751060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.751075] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.754672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.230 [2024-07-23 03:33:44.764015] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.764442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.764473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.764491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.764742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.764986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.765010] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.765025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.768622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.230 [2024-07-23 03:33:44.777958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.778417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.778448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.778465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.778715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.778959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.778982] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.778997] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.782587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.230 [2024-07-23 03:33:44.791927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.230 [2024-07-23 03:33:44.792377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.230 [2024-07-23 03:33:44.792407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.230 [2024-07-23 03:33:44.792424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.230 [2024-07-23 03:33:44.792675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.230 [2024-07-23 03:33:44.792919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.230 [2024-07-23 03:33:44.792942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.230 [2024-07-23 03:33:44.792958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.230 [2024-07-23 03:33:44.796546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.490 [2024-07-23 03:33:44.805903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.490 [2024-07-23 03:33:44.806431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.490 [2024-07-23 03:33:44.806461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.490 [2024-07-23 03:33:44.806478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.490 [2024-07-23 03:33:44.806730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.490 [2024-07-23 03:33:44.806974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.490 [2024-07-23 03:33:44.806997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.490 [2024-07-23 03:33:44.807012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.490 [2024-07-23 03:33:44.810609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.490 [2024-07-23 03:33:44.819953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.490 [2024-07-23 03:33:44.820381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.490 [2024-07-23 03:33:44.820412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.490 [2024-07-23 03:33:44.820429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.490 [2024-07-23 03:33:44.820682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.490 [2024-07-23 03:33:44.820926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.490 [2024-07-23 03:33:44.820949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.490 [2024-07-23 03:33:44.820964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.490 [2024-07-23 03:33:44.824553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.490 [2024-07-23 03:33:44.833900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.490 [2024-07-23 03:33:44.834326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.490 [2024-07-23 03:33:44.834357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.490 [2024-07-23 03:33:44.834375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.490 [2024-07-23 03:33:44.834625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.490 [2024-07-23 03:33:44.834869] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.490 [2024-07-23 03:33:44.834893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.490 [2024-07-23 03:33:44.834908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.490 [2024-07-23 03:33:44.838500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.490 [2024-07-23 03:33:44.847842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.490 [2024-07-23 03:33:44.848372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.490 [2024-07-23 03:33:44.848403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.490 [2024-07-23 03:33:44.848427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.490 [2024-07-23 03:33:44.848682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.490 [2024-07-23 03:33:44.848925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.490 [2024-07-23 03:33:44.848949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.490 [2024-07-23 03:33:44.848964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.490 [2024-07-23 03:33:44.852552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.490 [2024-07-23 03:33:44.861894] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.490 [2024-07-23 03:33:44.862328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.490 [2024-07-23 03:33:44.862359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.490 [2024-07-23 03:33:44.862376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.862627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.862871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.862894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.862909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.866497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.491 [2024-07-23 03:33:44.875874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.876326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.876358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.876376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.876626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.876880] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.876903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.876919] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.880512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.491 [2024-07-23 03:33:44.889956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.890404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.890436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.890454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.890704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.890949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.890973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.890993] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.894591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.491 [2024-07-23 03:33:44.903962] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.904389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.904419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.904445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.904702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.904946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.904970] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.904985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.908580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.491 [2024-07-23 03:33:44.917975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.918426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.918457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.918474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.918725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.918969] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.918992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.919008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.922605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.491 [2024-07-23 03:33:44.931986] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.932432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.932463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.932481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.932733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.932978] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.933001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.933016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.936626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.491 [2024-07-23 03:33:44.945984] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.946390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.946421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.946439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.946691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.946936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.946959] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.946974] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.950573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.491 [2024-07-23 03:33:44.959946] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.960399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.960431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.960448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.960698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.960942] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.960966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.960981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.964575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.491 [2024-07-23 03:33:44.973929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.974392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.974423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.974440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.974691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.974936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.974960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.974975] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.978573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.491 [2024-07-23 03:33:44.987937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:44.988386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:44.988417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:44.988434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.491 [2024-07-23 03:33:44.988691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.491 [2024-07-23 03:33:44.988935] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.491 [2024-07-23 03:33:44.988958] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.491 [2024-07-23 03:33:44.988973] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.491 [2024-07-23 03:33:44.992567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.491 [2024-07-23 03:33:45.001916] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.491 [2024-07-23 03:33:45.002474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.491 [2024-07-23 03:33:45.002540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.491 [2024-07-23 03:33:45.002558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.492 [2024-07-23 03:33:45.002805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.492 [2024-07-23 03:33:45.003049] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.492 [2024-07-23 03:33:45.003073] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.492 [2024-07-23 03:33:45.003088] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.492 [2024-07-23 03:33:45.006711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.492 [2024-07-23 03:33:45.015853] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.492 [2024-07-23 03:33:45.016446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.492 [2024-07-23 03:33:45.016478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.492 [2024-07-23 03:33:45.016496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.492 [2024-07-23 03:33:45.016746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.492 [2024-07-23 03:33:45.016991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.492 [2024-07-23 03:33:45.017014] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.492 [2024-07-23 03:33:45.017030] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.492 [2024-07-23 03:33:45.020632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.492 [2024-07-23 03:33:45.029776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.492 [2024-07-23 03:33:45.030177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.492 [2024-07-23 03:33:45.030208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.492 [2024-07-23 03:33:45.030225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.492 [2024-07-23 03:33:45.030465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.492 [2024-07-23 03:33:45.030718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.492 [2024-07-23 03:33:45.030743] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.492 [2024-07-23 03:33:45.030763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.492 [2024-07-23 03:33:45.034353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.492 [2024-07-23 03:33:45.043700] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.492 [2024-07-23 03:33:45.044133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.492 [2024-07-23 03:33:45.044164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.492 [2024-07-23 03:33:45.044181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.492 [2024-07-23 03:33:45.044420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.492 [2024-07-23 03:33:45.044674] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.492 [2024-07-23 03:33:45.044698] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.492 [2024-07-23 03:33:45.044714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.492 [2024-07-23 03:33:45.048303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.492 [2024-07-23 03:33:45.057652] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.492 [2024-07-23 03:33:45.058104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.492 [2024-07-23 03:33:45.058135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.492 [2024-07-23 03:33:45.058153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.492 [2024-07-23 03:33:45.058391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.492 [2024-07-23 03:33:45.058644] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.492 [2024-07-23 03:33:45.058667] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.492 [2024-07-23 03:33:45.058682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.492 [2024-07-23 03:33:45.062272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.752 [2024-07-23 03:33:45.071609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.072068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.072098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.072116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.072355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.072598] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.072634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.072650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.076243] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.752 [2024-07-23 03:33:45.085605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.086076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.086112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.086131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.086370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.086624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.086648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.086663] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.090254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.752 [2024-07-23 03:33:45.099604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.100064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.100094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.100112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.100350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.100593] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.100625] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.100643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.104241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.752 [2024-07-23 03:33:45.113588] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.114069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.114101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.114120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.114359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.114624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.114655] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.114671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.118265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.752 [2024-07-23 03:33:45.127625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.128064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.128095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.128113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.128352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.128601] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.128635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.128652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.132242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.752 [2024-07-23 03:33:45.141583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.142041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.142072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.142090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.142328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.142571] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.142595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.142610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.146212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.752 [2024-07-23 03:33:45.155543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.156015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.156045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.156063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.156301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.156544] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.156568] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.156582] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.160199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.752 [2024-07-23 03:33:45.169547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.170010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.170041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.170059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.170298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.170542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.170565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.752 [2024-07-23 03:33:45.170580] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.752 [2024-07-23 03:33:45.174180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.752 [2024-07-23 03:33:45.183525] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.752 [2024-07-23 03:33:45.183965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.752 [2024-07-23 03:33:45.183997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.752 [2024-07-23 03:33:45.184016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.752 [2024-07-23 03:33:45.184255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.752 [2024-07-23 03:33:45.184498] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.752 [2024-07-23 03:33:45.184521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.184536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.188163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.753 [2024-07-23 03:33:45.197504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.197921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.197953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.197971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.198210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.198453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.198476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.198491] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.202088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.753 [2024-07-23 03:33:45.211426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.211877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.211909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.211926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.212165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.212408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.212432] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.212447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.216046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.753 [2024-07-23 03:33:45.225398] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.225808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.225839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.225862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.226102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.226345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.226368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.226384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.229982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.753 [2024-07-23 03:33:45.239310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.239765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.239796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.239814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.240053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.240296] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.240319] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.240334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.243932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.753 [2024-07-23 03:33:45.253260] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.253720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.253752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.253770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.254009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.254253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.254276] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.254291] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.257887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.753 [2024-07-23 03:33:45.267225] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.267671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.267703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.267721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.267960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.268204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.268233] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.268249] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.271854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.753 [2024-07-23 03:33:45.281199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.281655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.281686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.281705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.281944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.282187] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.282211] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.282226] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.285826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.753 [2024-07-23 03:33:45.295162] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.295621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.295653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.295671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.295910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.296153] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.296176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.296191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.299786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.753 [2024-07-23 03:33:45.309128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.309562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.309593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.309610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.309862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.310105] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.310128] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.310143] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.753 [2024-07-23 03:33:45.313738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.753 [2024-07-23 03:33:45.323081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.753 [2024-07-23 03:33:45.323521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.753 [2024-07-23 03:33:45.323553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:18.753 [2024-07-23 03:33:45.323570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:18.753 [2024-07-23 03:33:45.323820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:18.753 [2024-07-23 03:33:45.324065] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.753 [2024-07-23 03:33:45.324088] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.753 [2024-07-23 03:33:45.324103] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.327702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.014 [2024-07-23 03:33:45.337031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.337484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.337515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.337533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.337782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.338026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.338050] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.338065] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.341663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.014 [2024-07-23 03:33:45.350994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.351454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.351484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.351502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.351752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.351996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.352019] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.352035] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.355628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.014 [2024-07-23 03:33:45.364957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.365407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.365438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.365456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.365711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.365955] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.365978] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.365993] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.369582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.014 [2024-07-23 03:33:45.378939] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.379370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.379401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.379419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.379668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.379912] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.379936] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.379951] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.383543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.014 [2024-07-23 03:33:45.392878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.393304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.393335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.393353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.393592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.393846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.393870] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.393885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.397476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.014 [2024-07-23 03:33:45.406818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.407244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.407274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.407291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.407530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.407784] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.407808] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.407829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.411417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.014 [2024-07-23 03:33:45.420770] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.014 [2024-07-23 03:33:45.421229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.014 [2024-07-23 03:33:45.421259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.014 [2024-07-23 03:33:45.421278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.014 [2024-07-23 03:33:45.421516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.014 [2024-07-23 03:33:45.421770] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.014 [2024-07-23 03:33:45.421794] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.014 [2024-07-23 03:33:45.421809] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.014 [2024-07-23 03:33:45.425395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.434728] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.435151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.435181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.435199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.435438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.435692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.435716] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.435731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.439318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.015 [2024-07-23 03:33:45.448652] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.449104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.449134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.449152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.449391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.449646] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.449670] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.449685] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.453273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.462602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.463064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.463100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.463118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.463358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.463600] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.463634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.463650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.467235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.015 [2024-07-23 03:33:45.476564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.476993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.477023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.477042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.477280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.477524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.477547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.477562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.481159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.490487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.490920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.490951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.490969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.491207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.491450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.491473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.491488] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.495092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.015 [2024-07-23 03:33:45.504421] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.504887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.504918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.504936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.505175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.505424] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.505448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.505463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.509069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.518399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.518866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.518897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.518915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.519154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.519397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.519420] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.519435] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.523032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.015 [2024-07-23 03:33:45.532360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.532792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.532823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.532840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.533079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.533323] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.533346] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.533361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.536961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.546320] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.546775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.546807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.546825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.547064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.547308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.547331] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.547347] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.550946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.015 [2024-07-23 03:33:45.560288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.560749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.015 [2024-07-23 03:33:45.560781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.015 [2024-07-23 03:33:45.560799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.015 [2024-07-23 03:33:45.561039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.015 [2024-07-23 03:33:45.561282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.015 [2024-07-23 03:33:45.561305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.015 [2024-07-23 03:33:45.561320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.015 [2024-07-23 03:33:45.564921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.015 [2024-07-23 03:33:45.574253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.015 [2024-07-23 03:33:45.574727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-23 03:33:45.574769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.016 [2024-07-23 03:33:45.574787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.016 [2024-07-23 03:33:45.575027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.016 [2024-07-23 03:33:45.575271] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.016 [2024-07-23 03:33:45.575294] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.016 [2024-07-23 03:33:45.575310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.016 [2024-07-23 03:33:45.578908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.016 [2024-07-23 03:33:45.588242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.016 [2024-07-23 03:33:45.588691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.016 [2024-07-23 03:33:45.588722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.016 [2024-07-23 03:33:45.588740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.588980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.589223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.275 [2024-07-23 03:33:45.589246] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.275 [2024-07-23 03:33:45.589261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.275 [2024-07-23 03:33:45.592863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.275 [2024-07-23 03:33:45.602192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.275 [2024-07-23 03:33:45.602625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.275 [2024-07-23 03:33:45.602656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.275 [2024-07-23 03:33:45.602679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.602920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.603164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.275 [2024-07-23 03:33:45.603187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.275 [2024-07-23 03:33:45.603202] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.275 [2024-07-23 03:33:45.606804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.275 [2024-07-23 03:33:45.616138] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.275 [2024-07-23 03:33:45.616604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.275 [2024-07-23 03:33:45.616641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.275 [2024-07-23 03:33:45.616660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.616899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.617142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.275 [2024-07-23 03:33:45.617165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.275 [2024-07-23 03:33:45.617180] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.275 [2024-07-23 03:33:45.620777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.275 [2024-07-23 03:33:45.630110] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.275 [2024-07-23 03:33:45.630579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.275 [2024-07-23 03:33:45.630609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.275 [2024-07-23 03:33:45.630637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.630877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.631121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.275 [2024-07-23 03:33:45.631144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.275 [2024-07-23 03:33:45.631158] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.275 [2024-07-23 03:33:45.634750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.275 [2024-07-23 03:33:45.644077] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.275 [2024-07-23 03:33:45.644507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.275 [2024-07-23 03:33:45.644538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.275 [2024-07-23 03:33:45.644555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.644806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.645049] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.275 [2024-07-23 03:33:45.645078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.275 [2024-07-23 03:33:45.645094] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.275 [2024-07-23 03:33:45.648689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.275 [2024-07-23 03:33:45.658017] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.275 [2024-07-23 03:33:45.658486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.275 [2024-07-23 03:33:45.658517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.275 [2024-07-23 03:33:45.658534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.275 [2024-07-23 03:33:45.658784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.275 [2024-07-23 03:33:45.659028] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.659051] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.659067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.662660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.276 [2024-07-23 03:33:45.671989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.672451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.672481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.672499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.672749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.672993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.673016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.673031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.676628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.276 [2024-07-23 03:33:45.685953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.686422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.686453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.686471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.686723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.686967] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.686990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.687005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.690593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.276 [2024-07-23 03:33:45.699930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.700330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.700360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.700378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.700627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.700871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.700894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.700910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.704500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.276 [2024-07-23 03:33:45.713836] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.714268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.714298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.714316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.714555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.714808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.714832] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.714847] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.718434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.276 [2024-07-23 03:33:45.727774] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.728206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.728236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.728254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.728493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.728747] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.728771] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.728786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.732373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.276 [2024-07-23 03:33:45.741712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.742163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.742194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.742211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.742456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.742712] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.742735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.742750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.746336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.276 [2024-07-23 03:33:45.755671] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.756100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.756130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.756148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.756387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.756641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.756664] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.756680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.760265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.276 [2024-07-23 03:33:45.769588] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.770053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.770084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.770102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.770340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.770584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.770608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.770633] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.774222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.276 [2024-07-23 03:33:45.783545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.784027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.784058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.784075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.784315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.784558] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.784581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.784601] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.276 [2024-07-23 03:33:45.788201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.276 [2024-07-23 03:33:45.797528] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.276 [2024-07-23 03:33:45.797983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.276 [2024-07-23 03:33:45.798014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.276 [2024-07-23 03:33:45.798032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.276 [2024-07-23 03:33:45.798270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.276 [2024-07-23 03:33:45.798514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.276 [2024-07-23 03:33:45.798537] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.276 [2024-07-23 03:33:45.798552] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.277 [2024-07-23 03:33:45.802148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.277 [2024-07-23 03:33:45.811479] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.277 [2024-07-23 03:33:45.811934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-23 03:33:45.811964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.277 [2024-07-23 03:33:45.811982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.277 [2024-07-23 03:33:45.812221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.277 [2024-07-23 03:33:45.812464] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.277 [2024-07-23 03:33:45.812488] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.277 [2024-07-23 03:33:45.812503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.277 [2024-07-23 03:33:45.816099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.277 [2024-07-23 03:33:45.825427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.277 [2024-07-23 03:33:45.825862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-23 03:33:45.825893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.277 [2024-07-23 03:33:45.825911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.277 [2024-07-23 03:33:45.826149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.277 [2024-07-23 03:33:45.826393] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.277 [2024-07-23 03:33:45.826416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.277 [2024-07-23 03:33:45.826431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.277 [2024-07-23 03:33:45.830029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.277 [2024-07-23 03:33:45.839355] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.277 [2024-07-23 03:33:45.839821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-23 03:33:45.839857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.277 [2024-07-23 03:33:45.839875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.277 [2024-07-23 03:33:45.840114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.277 [2024-07-23 03:33:45.840357] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.277 [2024-07-23 03:33:45.840381] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.277 [2024-07-23 03:33:45.840396] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.277 [2024-07-23 03:33:45.843993] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.536 [2024-07-23 03:33:45.853321] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.536 [2024-07-23 03:33:45.853772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-23 03:33:45.853803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.536 [2024-07-23 03:33:45.853821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.536 [2024-07-23 03:33:45.854061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.536 [2024-07-23 03:33:45.854304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.536 [2024-07-23 03:33:45.854327] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.536 [2024-07-23 03:33:45.854342] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.536 [2024-07-23 03:33:45.857938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.536 [2024-07-23 03:33:45.867265] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.536 [2024-07-23 03:33:45.867720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-23 03:33:45.867752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.536 [2024-07-23 03:33:45.867770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.536 [2024-07-23 03:33:45.868009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.536 [2024-07-23 03:33:45.868252] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.536 [2024-07-23 03:33:45.868275] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.536 [2024-07-23 03:33:45.868290] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.536 [2024-07-23 03:33:45.871885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.536 [2024-07-23 03:33:45.881218] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.536 [2024-07-23 03:33:45.881654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-23 03:33:45.881685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.536 [2024-07-23 03:33:45.881704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.536 [2024-07-23 03:33:45.881943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.536 [2024-07-23 03:33:45.882192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.536 [2024-07-23 03:33:45.882216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.536 [2024-07-23 03:33:45.882231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.536 [2024-07-23 03:33:45.885829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.536 [2024-07-23 03:33:45.895157] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.536 [2024-07-23 03:33:45.895626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-23 03:33:45.895657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.536 [2024-07-23 03:33:45.895674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.536 [2024-07-23 03:33:45.895913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.536 [2024-07-23 03:33:45.896156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.536 [2024-07-23 03:33:45.896179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.536 [2024-07-23 03:33:45.896195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.536 [2024-07-23 03:33:45.899792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.536 [2024-07-23 03:33:45.909369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.536 [2024-07-23 03:33:45.909843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.909873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.909891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.910130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.910373] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.910396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.910412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.914012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.537 [2024-07-23 03:33:45.923338] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.923773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.923804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.923822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.924061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.924305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.924328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.924343] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.927941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.537 [2024-07-23 03:33:45.937277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.937732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.937764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.937781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.938021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.938265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.938288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.938303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.941900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.537 [2024-07-23 03:33:45.951230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.951658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.951690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.951708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.951947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.952191] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.952214] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.952229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.955830] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.537 [2024-07-23 03:33:45.965168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.965629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.965660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.965678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.965917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.966160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.966183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.966199] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.969794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.537 [2024-07-23 03:33:45.979128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.979557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.979587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.979611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.979862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.980106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.980129] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.980144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.983740] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.537 [2024-07-23 03:33:45.993071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:45.993513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:45.993544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:45.993562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:45.993813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:45.994057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:45.994081] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:45.994096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:45.997692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.537 [2024-07-23 03:33:46.007027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:46.007473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:46.007504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:46.007521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:46.007772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:46.008016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:46.008040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:46.008055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:46.011649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.537 [2024-07-23 03:33:46.020980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:46.021417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:46.021448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:46.021466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:46.021717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:46.021960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:46.021993] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:46.022008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:46.025596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.537 [2024-07-23 03:33:46.034937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:46.035378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.537 [2024-07-23 03:33:46.035409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.537 [2024-07-23 03:33:46.035427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.537 [2024-07-23 03:33:46.035675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.537 [2024-07-23 03:33:46.035918] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.537 [2024-07-23 03:33:46.035942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.537 [2024-07-23 03:33:46.035957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.537 [2024-07-23 03:33:46.039545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.537 [2024-07-23 03:33:46.048904] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.537 [2024-07-23 03:33:46.049356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.538 [2024-07-23 03:33:46.049388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.538 [2024-07-23 03:33:46.049406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.538 [2024-07-23 03:33:46.049655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.538 [2024-07-23 03:33:46.049899] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.538 [2024-07-23 03:33:46.049922] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.538 [2024-07-23 03:33:46.049937] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.538 [2024-07-23 03:33:46.053532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.538 [2024-07-23 03:33:46.062881] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.538 [2024-07-23 03:33:46.063305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.538 [2024-07-23 03:33:46.063337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.538 [2024-07-23 03:33:46.063355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.538 [2024-07-23 03:33:46.063594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.538 [2024-07-23 03:33:46.063847] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.538 [2024-07-23 03:33:46.063877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.538 [2024-07-23 03:33:46.063892] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.538 [2024-07-23 03:33:46.067492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.538 [2024-07-23 03:33:46.076877] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.538 [2024-07-23 03:33:46.077306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.538 [2024-07-23 03:33:46.077337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.538 [2024-07-23 03:33:46.077355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.538 [2024-07-23 03:33:46.077594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.538 [2024-07-23 03:33:46.077845] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.538 [2024-07-23 03:33:46.077869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.538 [2024-07-23 03:33:46.077884] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.538 [2024-07-23 03:33:46.081477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.538 [2024-07-23 03:33:46.090839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.538 [2024-07-23 03:33:46.091291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.538 [2024-07-23 03:33:46.091322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.538 [2024-07-23 03:33:46.091340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.538 [2024-07-23 03:33:46.091579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.538 [2024-07-23 03:33:46.091831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.538 [2024-07-23 03:33:46.091856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.538 [2024-07-23 03:33:46.091871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.538 [2024-07-23 03:33:46.095463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.538 [2024-07-23 03:33:46.104812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.538 [2024-07-23 03:33:46.105238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.538 [2024-07-23 03:33:46.105268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.538 [2024-07-23 03:33:46.105286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.538 [2024-07-23 03:33:46.105525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.538 [2024-07-23 03:33:46.105778] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.538 [2024-07-23 03:33:46.105802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.538 [2024-07-23 03:33:46.105817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.538 [2024-07-23 03:33:46.109408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.798 [2024-07-23 03:33:46.118750] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.119203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.119234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.119252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.119496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.119749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.119773] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.119788] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.123377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.798 [2024-07-23 03:33:46.132726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.133152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.133183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.133200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.133439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.133694] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.133717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.133733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.137322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.798 [2024-07-23 03:33:46.146657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.147076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.147107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.147125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.147364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.147606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.147639] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.147655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.151244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.798 [2024-07-23 03:33:46.160600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.161106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.161136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.161153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.161392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.161646] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.161670] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.161692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.165288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.798 [2024-07-23 03:33:46.174638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.175067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.175099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.175117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.175358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.175601] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.175637] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.175655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.179246] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.798 [2024-07-23 03:33:46.188631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.189060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.189091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.189109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.189349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.189593] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.189625] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.189643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.193234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.798 [2024-07-23 03:33:46.202571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.203035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.203066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.203084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.203323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.203566] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.203589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.203605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.207210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.798 [2024-07-23 03:33:46.216546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.217042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.217078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.217096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.217335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.217578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.217601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.217629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.221223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.798 [2024-07-23 03:33:46.230577] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.230993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.231024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.231042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.231281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.231523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.231546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.798 [2024-07-23 03:33:46.231561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.798 [2024-07-23 03:33:46.235162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.798 [2024-07-23 03:33:46.244492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.798 [2024-07-23 03:33:46.244929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-23 03:33:46.244959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.798 [2024-07-23 03:33:46.244977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.798 [2024-07-23 03:33:46.245216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.798 [2024-07-23 03:33:46.245459] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.798 [2024-07-23 03:33:46.245483] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.245498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.249133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.799 [2024-07-23 03:33:46.258470] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.258884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.258915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.258933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.259172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.259421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.259445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.259460] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.263054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.799 [2024-07-23 03:33:46.272384] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.272820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.272850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.272868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.273106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.273349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.273372] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.273388] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.276985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.799 [2024-07-23 03:33:46.286313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.286742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.286774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.286793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.287033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.287277] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.287300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.287316] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.290911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.799 [2024-07-23 03:33:46.300246] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.300693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.300724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.300742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.300982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.301225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.301248] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.301263] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.304872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.799 [2024-07-23 03:33:46.314208] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.314660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.314692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.314710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.314949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.315192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.315215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.315231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.318826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.799 [2024-07-23 03:33:46.328157] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.328587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.328627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.328647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.328886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.329130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.329153] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.329168] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.332766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.799 [2024-07-23 03:33:46.342093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.342526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.342557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.342575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.342825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.343070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.343093] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.343108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.346703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.799 [2024-07-23 03:33:46.356028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.356487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.356518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.356541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.356794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.357038] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.357061] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.357076] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.799 [2024-07-23 03:33:46.360669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.799 [2024-07-23 03:33:46.369998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.799 [2024-07-23 03:33:46.370425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-23 03:33:46.370455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:19.799 [2024-07-23 03:33:46.370473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:19.799 [2024-07-23 03:33:46.370723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:19.799 [2024-07-23 03:33:46.370968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.799 [2024-07-23 03:33:46.370991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.799 [2024-07-23 03:33:46.371006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.374595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.060 [2024-07-23 03:33:46.383930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.384359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.384390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.384407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.384655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.384899] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.384923] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.384938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.388523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.060 [2024-07-23 03:33:46.397860] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.398285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.398316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.398333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.398573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.398825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.398856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.398873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.402459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.060 [2024-07-23 03:33:46.411792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.412263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.412294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.412311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.412550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.412805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.412828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.412844] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.416436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.060 [2024-07-23 03:33:46.425771] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.426193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.426224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.426241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.426480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.426735] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.426758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.426773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.430358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.060 [2024-07-23 03:33:46.439705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.440151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.440181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.440199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.440437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.440692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.440716] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.440731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.444318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.060 [2024-07-23 03:33:46.453647] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.454076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.454106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.454124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.454363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.454606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.454639] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.454655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.458241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.060 [2024-07-23 03:33:46.467562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.467995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.468027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.468045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.468284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.468528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.468550] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.468566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.472164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.060 [2024-07-23 03:33:46.481455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.481924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.481957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.481975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.482215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.482458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.482482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.482497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.486099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.060 [2024-07-23 03:33:46.495428] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.495838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.495869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.495887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.060 [2024-07-23 03:33:46.496132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.060 [2024-07-23 03:33:46.496376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.060 [2024-07-23 03:33:46.496400] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.060 [2024-07-23 03:33:46.496415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.060 [2024-07-23 03:33:46.500014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.060 [2024-07-23 03:33:46.509355] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.060 [2024-07-23 03:33:46.509822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.060 [2024-07-23 03:33:46.509853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.060 [2024-07-23 03:33:46.509871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.510111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.510354] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.510377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.510392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.513987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.061 [2024-07-23 03:33:46.523320] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.523742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.523773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.523790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.524029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.524273] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.524296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.524312] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.527910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.061 [2024-07-23 03:33:46.537236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.537721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.537753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.537771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.538011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.538255] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.538278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.538299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.541897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.061 [2024-07-23 03:33:46.551226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.551675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.551706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.551724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.551963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.552206] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.552229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.552244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.555851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.061 [2024-07-23 03:33:46.565191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.565723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.565754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.565772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.566011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.566254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.566278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.566293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.570060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.061 [2024-07-23 03:33:46.579195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.579649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.579680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.579698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.579937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.580181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.580204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.580219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.583822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.061 [2024-07-23 03:33:46.593158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.593628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.593663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.593682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.593921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.594165] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.594188] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.594203] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.597808] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.061 [2024-07-23 03:33:46.607147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.607599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.607638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.607657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.607896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.608140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.608163] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.608178] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.611781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.061 [2024-07-23 03:33:46.621121] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.061 [2024-07-23 03:33:46.621592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.061 [2024-07-23 03:33:46.621630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.061 [2024-07-23 03:33:46.621650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.061 [2024-07-23 03:33:46.621890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.061 [2024-07-23 03:33:46.622133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.061 [2024-07-23 03:33:46.622156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.061 [2024-07-23 03:33:46.622171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.061 [2024-07-23 03:33:46.625769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.321 [2024-07-23 03:33:46.635152] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.635610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.635648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.635667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.635906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.636156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.636179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.636194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.639797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.321 [2024-07-23 03:33:46.649137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.649562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.649593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.649611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.649859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.650102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.650126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.650141] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.653740] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.321 [2024-07-23 03:33:46.663076] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.663531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.663562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.663580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.663829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.664074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.664097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.664112] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.667712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.321 [2024-07-23 03:33:46.677048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.677472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.677503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.677521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.677771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.678015] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.678039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.678054] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.681661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.321 [2024-07-23 03:33:46.690996] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.691451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.691482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.691500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.691750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.691994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.692017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.692033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.695658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.321 [2024-07-23 03:33:46.704992] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.705440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.705471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.705488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.705739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.705983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.706006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.706022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.321 [2024-07-23 03:33:46.709622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.321 [2024-07-23 03:33:46.718952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.321 [2024-07-23 03:33:46.719515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.321 [2024-07-23 03:33:46.719584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.321 [2024-07-23 03:33:46.719602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.321 [2024-07-23 03:33:46.719851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.321 [2024-07-23 03:33:46.720096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.321 [2024-07-23 03:33:46.720119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.321 [2024-07-23 03:33:46.720134] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.723733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.322 [2024-07-23 03:33:46.732850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.733310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.733342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.733368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.733609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.733864] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.733888] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.733903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.737492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.322 [2024-07-23 03:33:46.746834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.747258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.747288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.747306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.747545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.747799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.747823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.747838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.751428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.322 [2024-07-23 03:33:46.760770] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.761216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.761247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.761264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.761503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.761757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.761780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.761795] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.765382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.322 [2024-07-23 03:33:46.774725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.775184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.775214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.775231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.775469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.775725] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.775754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.775770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.779362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.322 [2024-07-23 03:33:46.788706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.789156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.789186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.789204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.789443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.789700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.789724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.789739] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.793330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.322 [2024-07-23 03:33:46.802674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.803108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.803139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.803157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.803395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.803651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.803674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.803690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.807285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.322 [2024-07-23 03:33:46.816636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.817065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.817096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.817114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.817352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.817596] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.817631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.817648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.821238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.322 [2024-07-23 03:33:46.830575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.322 [2024-07-23 03:33:46.831063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.322 [2024-07-23 03:33:46.831094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.322 [2024-07-23 03:33:46.831111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.322 [2024-07-23 03:33:46.831350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.322 [2024-07-23 03:33:46.831593] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.322 [2024-07-23 03:33:46.831629] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.322 [2024-07-23 03:33:46.831647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.322 [2024-07-23 03:33:46.835242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.322 [2024-07-23 03:33:46.844581] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.322 [2024-07-23 03:33:46.845039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.322 [2024-07-23 03:33:46.845070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420
00:34:20.322 [2024-07-23 03:33:46.845087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set
00:34:20.322 [2024-07-23 03:33:46.845326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor
00:34:20.322 [2024-07-23 03:33:46.845569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.322 [2024-07-23 03:33:46.845592] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.322 [2024-07-23 03:33:46.845606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.322 [2024-07-23 03:33:46.849210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 592727 Killed "${NVMF_APP[@]}" "$@"
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:20.322 [2024-07-23 03:33:46.858568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=593677
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:20.322 03:33:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 593677
00:34:20.323 [2024-07-23 03:33:46.859028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.323 [2024-07-23 03:33:46.859059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420
00:34:20.323 [2024-07-23 03:33:46.859076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 593677 ']'
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:20.323 [2024-07-23 03:33:46.859316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:20.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:20.323 [2024-07-23 03:33:46.859565] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.323 [2024-07-23 03:33:46.859589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.323 [2024-07-23 03:33:46.859604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:20.323 03:33:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:20.323 [2024-07-23 03:33:46.863213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.323 [2024-07-23 03:33:46.872546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.323 [2024-07-23 03:33:46.873015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.323 [2024-07-23 03:33:46.873046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420
00:34:20.323 [2024-07-23 03:33:46.873065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set
00:34:20.323 [2024-07-23 03:33:46.873303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor
00:34:20.323 [2024-07-23 03:33:46.873547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.323 [2024-07-23 03:33:46.873570] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.323 [2024-07-23 03:33:46.873585] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.323 [2024-07-23 03:33:46.877183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.323 [2024-07-23 03:33:46.886516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.323 [2024-07-23 03:33:46.886982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.323 [2024-07-23 03:33:46.887013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420
00:34:20.323 [2024-07-23 03:33:46.887031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set
00:34:20.323 [2024-07-23 03:33:46.887270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor
00:34:20.323 [2024-07-23 03:33:46.887513] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.323 [2024-07-23 03:33:46.887536] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.323 [2024-07-23 03:33:46.887551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.323 [2024-07-23 03:33:46.891148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
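The previous nvmf target process was killed (the bdevperf.sh "Killed" line), which is why every connect() above was refused; tgt_init and nvmfappstart now relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace as PID 593677, and waitforlisten polls (up to max_retries=100) for the new process to expose its RPC socket at /var/tmp/spdk.sock. The core mask -m 0xE is binary 1110, i.e. the target is pinned to cores 1-3. The host-side retry loop keeps failing until the new target is up and its TCP listener on 10.0.0.2:4420 is re-created. The body of tgt_init is not shown in this excerpt, so the following is only a hypothetical sketch of the kind of RPCs such a helper typically issues once the RPC socket is available; the bdev name, size and serial number are assumptions, and only the NQN, address and port are taken from the log:

  # Hypothetical re-provisioning sketch, not the literal tgt_init from bdevperf.sh.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ns="ip netns exec cvl_0_0_ns_spdk"
  $ns "$rpc" -s /var/tmp/spdk.sock nvmf_create_transport -t TCP
  $ns "$rpc" -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  $ns "$rpc" -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $ns "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $ns "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420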
00:34:20.583 [2024-07-23 03:33:46.900482] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.900958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.900990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.901008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.901247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.901495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.901519] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.901534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.905138] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.583 [2024-07-23 03:33:46.910355] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:20.583 [2024-07-23 03:33:46.910444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.583 [2024-07-23 03:33:46.914466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.914900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.914931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.914949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.915188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.915431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.915455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.915471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.919070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.583 [2024-07-23 03:33:46.928601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.929061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.929092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.929111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.929350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.929594] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.929625] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.929643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.933233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.583 [2024-07-23 03:33:46.942569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.943030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.943061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.943079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.943318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.943567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.943591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.943606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.947206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.583 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.583 [2024-07-23 03:33:46.956547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.956990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.957021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.957038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.957277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.957520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.957544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.957559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.961161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.583 [2024-07-23 03:33:46.970142] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.970623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.970661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.970677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.970892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.971125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.971145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.971159] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.974372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
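The "EAL: No free 2048 kB hugepages reported on node 1" notice above is informational: NUMA node 1 simply has no 2 MiB hugepages reserved, and the target proceeds with the pages available on node 0. If more pages were needed they would normally be reserved before launch through the setup script; a sketch, assuming the standard scripts/setup.sh from the SPDK tree (the sizes are illustrative):

  # Reserve 4 GiB of hugepages for SPDK
  sudo HUGEMEM=4096 ./scripts/setup.sh
  # setup.sh also honors HUGENODE to pin the reservation to one NUMA node
  sudo HUGEMEM=4096 HUGENODE=0 ./scripts/setup.sh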
00:34:20.583 [2024-07-23 03:33:46.983665] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.583 [2024-07-23 03:33:46.984048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-23 03:33:46.984075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.583 [2024-07-23 03:33:46.984092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.583 [2024-07-23 03:33:46.984335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.583 [2024-07-23 03:33:46.984541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.583 [2024-07-23 03:33:46.984560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.583 [2024-07-23 03:33:46.984573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.583 [2024-07-23 03:33:46.987228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:20.583 [2024-07-23 03:33:46.987946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.583 [2024-07-23 03:33:46.997336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:46.997953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:46.997989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:46.998011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:46.998254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:46.998504] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:46.998525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:46.998543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.002012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.584 [2024-07-23 03:33:47.011216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.011737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.011770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.011790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.012039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.012267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.012305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.012321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.015657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.584 [2024-07-23 03:33:47.024969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.025423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.025452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.025468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.025696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.025931] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.025952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.025980] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.029293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.584 [2024-07-23 03:33:47.038626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.039144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.039184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.039203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.039449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.039711] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.039734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.039750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.043074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.584 [2024-07-23 03:33:47.052292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.052856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.052892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.052913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.053165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.053378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.053398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.053415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.056702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.584 [2024-07-23 03:33:47.065957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.066476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.066504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.066520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.066777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.066997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.067034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.067048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.070366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.584 [2024-07-23 03:33:47.079516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.079981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.080009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.080026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.080272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.080486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.080507] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.080520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.083151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.584 [2024-07-23 03:33:47.083186] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.584 [2024-07-23 03:33:47.083214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.584 [2024-07-23 03:33:47.083226] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.584 [2024-07-23 03:33:47.083236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
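The app_setup_trace notices above spell out how to grab a trace snapshot while the target is running. Following them literally, and assuming the spdk_trace tool from the same build is available under build/bin:

  # Snapshot the nvmf trace shared memory (shm name nvmf, shm id 0, as printed above)
  sudo ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # Or keep the raw shm file for offline analysis/debug, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/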
00:34:20.584 [2024-07-23 03:33:47.083330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.584 [2024-07-23 03:33:47.083510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.584 [2024-07-23 03:33:47.083513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.584 [2024-07-23 03:33:47.083862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.584 [2024-07-23 03:33:47.093140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.093695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.093732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.093753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.093980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.094207] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.094229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.094247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.097619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.584 [2024-07-23 03:33:47.106786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.107364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.107402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.584 [2024-07-23 03:33:47.107424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.584 [2024-07-23 03:33:47.107664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.584 [2024-07-23 03:33:47.107892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.584 [2024-07-23 03:33:47.107915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.584 [2024-07-23 03:33:47.107933] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.584 [2024-07-23 03:33:47.111275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.584 [2024-07-23 03:33:47.120488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.584 [2024-07-23 03:33:47.121071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-23 03:33:47.121109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-23 03:33:47.121139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.585 [2024-07-23 03:33:47.121366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.585 [2024-07-23 03:33:47.121595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-23 03:33:47.121625] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-23 03:33:47.121645] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-23 03:33:47.125049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.585 [2024-07-23 03:33:47.134460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-23 03:33:47.135057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-23 03:33:47.135094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-23 03:33:47.135117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.585 [2024-07-23 03:33:47.135359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.585 [2024-07-23 03:33:47.135600] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-23 03:33:47.135632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-23 03:33:47.135651] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-23 03:33:47.138989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.585 [2024-07-23 03:33:47.148256] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.585 [2024-07-23 03:33:47.148715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-23 03:33:47.148750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.585 [2024-07-23 03:33:47.148771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.585 [2024-07-23 03:33:47.149012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.585 [2024-07-23 03:33:47.149250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.585 [2024-07-23 03:33:47.149272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.585 [2024-07-23 03:33:47.149290] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.585 [2024-07-23 03:33:47.152573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.873 [2024-07-23 03:33:47.161886] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.873 [2024-07-23 03:33:47.162379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.162428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.162459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.162733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.162974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.163005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.163025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.166355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.874 [2024-07-23 03:33:47.175621] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.176077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.176108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.176127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.176355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.176576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.176597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.176623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.179980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.874 [2024-07-23 03:33:47.189337] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.189731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.189760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.189776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.189993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.190213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.190234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.190248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.193507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.874 [2024-07-23 03:33:47.203093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.203499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.203528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.203544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.203769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.204002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.204030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.204046] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.207332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.874 [2024-07-23 03:33:47.216874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.217281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.217309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.217325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.217540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.217770] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.217792] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.217806] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.221083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.874 [2024-07-23 03:33:47.228068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.874 [2024-07-23 03:33:47.230703] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.231114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.231142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.231158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.231388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.231629] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.231651] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.231665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.874 [2024-07-23 03:33:47.235012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.874 [2024-07-23 03:33:47.244300] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.244740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.244770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.244791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.245036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.245242] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.245261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.245274] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
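The rpc_cmd calls interleaved with the reconnect noise above are thin wrappers around scripts/rpc.py talking to the target's RPC socket; here they create the TCP transport ("*** TCP Transport Init ***") and a 64 MiB malloc bdev with 512-byte blocks. Issued by hand they would look roughly like this (default socket path assumed; the extra -o switch the harness passes to nvmf_create_transport is omitted here):

  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192   # -u 8192: I/O unit size in bytes
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB backing bdev, 512 B block size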
00:34:20.874 [2024-07-23 03:33:47.248463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.874 [2024-07-23 03:33:47.258049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.258653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.258691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.874 [2024-07-23 03:33:47.258712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.874 [2024-07-23 03:33:47.258939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.874 [2024-07-23 03:33:47.259179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.874 [2024-07-23 03:33:47.259202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.874 [2024-07-23 03:33:47.259219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.874 [2024-07-23 03:33:47.262565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.874 Malloc0 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.874 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.874 [2024-07-23 03:33:47.271710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.874 [2024-07-23 03:33:47.272166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.874 [2024-07-23 03:33:47.272195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.875 [2024-07-23 03:33:47.272215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.875 [2024-07-23 03:33:47.272447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.875 [2024-07-23 03:33:47.272690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.875 [2024-07-23 03:33:47.272713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.875 [2024-07-23 03:33:47.272729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.875 [2024-07-23 03:33:47.276092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.875 [2024-07-23 03:33:47.285323] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.875 [2024-07-23 03:33:47.285720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.875 [2024-07-23 03:33:47.285749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0b1e0 with addr=10.0.0.2, port=4420 00:34:20.875 [2024-07-23 03:33:47.285765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0b1e0 is same with the state(5) to be set 00:34:20.875 [2024-07-23 03:33:47.285981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0b1e0 (9): Bad file descriptor 00:34:20.875 [2024-07-23 03:33:47.286200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:20.875 [2024-07-23 03:33:47.286221] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:20.875 [2024-07-23 03:33:47.286235] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:20.875 [2024-07-23 03:33:47.288434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.875 [2024-07-23 03:33:47.289640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.875 03:33:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 593011 00:34:20.875 [2024-07-23 03:33:47.298890] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:20.875 [2024-07-23 03:33:47.416151] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
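With the subsystem created, the Malloc0 namespace attached and the TCP listener added, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice finally appears, which is what lets the initiator's reconnect loop succeed shortly afterwards. The same provisioning sequence as standalone RPC calls, with a read-back check appended (the nvmf_get_subsystems call is an addition for illustration, not part of the harness output):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # confirm the namespace and listener are in place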
00:34:30.844 00:34:30.844 Latency(us) 00:34:30.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.844 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:30.844 Verification LBA range: start 0x0 length 0x4000 00:34:30.844 Nvme1n1 : 15.01 6797.47 26.55 8690.69 0.00 8238.97 849.54 20583.16 00:34:30.844 =================================================================================================================== 00:34:30.844 Total : 6797.47 26.55 8690.69 0.00 8238.97 849.54 20583.16 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:30.844 rmmod nvme_tcp 00:34:30.844 rmmod nvme_fabrics 00:34:30.844 rmmod nvme_keyring 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 593677 ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 593677 ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 593677' 00:34:30.844 killing process with pid 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 593677 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.844 
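The Latency(us) table above is bdevperf's end-of-run summary: Nvme1n1 ran the 128-deep, 4 KiB verify workload for about 15 s at roughly 6797 IOPS (26.55 MiB/s), with about 8691 failed I/Os per second caused by the forced controller resets, and an average completion latency of about 8.2 ms (min 0.85 ms, max 20.6 ms). A run of the same shape could be started roughly like this, assuming the bdevperf example app and a JSON config that attaches the Nvme1 controller over TCP (the config file name is illustrative):

  # 128-deep, 4 KiB verify workload for 15 seconds against the bdevs in the config
  sudo ./build/examples/bdevperf --json ./nvme1_tcp.json -q 128 -o 4096 -w verify -t 15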
03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.844 03:33:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.749 03:33:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:32.749 00:34:32.749 real 0m22.387s 00:34:32.749 user 0m59.599s 00:34:32.749 sys 0m4.453s 00:34:32.749 03:33:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.749 03:33:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.749 ************************************ 00:34:32.749 END TEST nvmf_bdevperf 00:34:32.749 ************************************ 00:34:32.749 03:33:59 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:32.749 03:33:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:32.749 03:33:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:32.749 03:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.749 ************************************ 00:34:32.749 START TEST nvmf_target_disconnect 00:34:32.749 ************************************ 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:32.749 * Looking for test storage... 
00:34:32.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:32.749 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:32.750 03:33:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:34.652 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:34.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:34.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.653 03:34:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:34.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:34.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.653 03:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:34.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:34:34.653 00:34:34.653 --- 10.0.0.2 ping statistics --- 00:34:34.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.653 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:34:34.653 00:34:34.653 --- 10.0.0.1 ping statistics --- 00:34:34.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.653 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:34.653 ************************************ 00:34:34.653 START TEST nvmf_target_disconnect_tc1 00:34:34.653 ************************************ 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:34.653 
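nvmf_tcp_init above splits the two ports of the NIC between the root namespace and a private one, so the target and initiator can talk over real hardware on a single machine. A condensed sketch of that setup, using the interface names and addresses from this run (cvl_0_0 becomes the target inside the namespace, cvl_0_1 stays in the root namespace as the initiator):

# Target side lives in its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator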
03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:34.653 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.653 [2024-07-23 03:34:01.189410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.653 [2024-07-23 03:34:01.189508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc740 with addr=10.0.0.2, port=4420 00:34:34.653 [2024-07-23 03:34:01.189551] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:34.653 [2024-07-23 03:34:01.189579] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:34.653 [2024-07-23 03:34:01.189603] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:34.653 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:34.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:34.653 Initializing NVMe Controllers 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:34.653 00:34:34.653 real 0m0.090s 00:34:34.653 user 0m0.038s 00:34:34.653 sys 
0m0.051s 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:34.653 ************************************ 00:34:34.653 END TEST nvmf_target_disconnect_tc1 00:34:34.653 ************************************ 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:34.653 03:34:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:34.911 ************************************ 00:34:34.911 START TEST nvmf_target_disconnect_tc2 00:34:34.911 ************************************ 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=596822 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 596822 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 596822 ']' 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:34.911 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:34.911 [2024-07-23 03:34:01.290422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
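The tc1 block above runs the reconnect example before any target is listening and treats the resulting connect() failure (errno 111) as the pass condition: the NOT wrapper records es=1 and the test only succeeds because the command failed. A rough sketch of that expect-failure pattern with the real reconnect arguments from this run; the helper name check_fails is illustrative and not part of the harness:

# Illustrative expect-failure wrapper; the harness's NOT/valid_exec_arg helpers behave
# similarly: run the command, remember its exit status, pass only if it failed.
check_fails() {
    "$@" && return 1     # command unexpectedly succeeded -> test failure
    return 0             # non-zero exit (e.g. connect() refused) -> test passes
}
check_fails ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'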
00:34:34.911 [2024-07-23 03:34:01.290492] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.911 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.911 [2024-07-23 03:34:01.355027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.911 [2024-07-23 03:34:01.440968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.911 [2024-07-23 03:34:01.441028] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.911 [2024-07-23 03:34:01.441056] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.911 [2024-07-23 03:34:01.441067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.911 [2024-07-23 03:34:01.441076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.911 [2024-07-23 03:34:01.441165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:34.911 [2024-07-23 03:34:01.441422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:34.911 [2024-07-23 03:34:01.441484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:34.911 [2024-07-23 03:34:01.441481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 Malloc0 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 [2024-07-23 03:34:01.604069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 [2024-07-23 03:34:01.632300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=596850 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:35.169 03:34:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:35.169 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.720 03:34:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 596822 00:34:37.720 03:34:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 
00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Write completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 [2024-07-23 03:34:03.656656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.720 
starting I/O failed 00:34:37.720 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 [2024-07-23 03:34:03.656958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 
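For tc2, the trace above starts the target inside the namespace with core mask 0xF0 (reactors on cores 4-7, matching the reactor_run notices) and configures it over RPC: a malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The harness drives this through its rpc_cmd wrapper; a sketch of the equivalent sequence with SPDK's standalone scripts/rpc.py CLI, using the same RPC names and arguments that appear in the log:

# Launch the target in the namespace, then wait for /var/tmp/spdk.sock before issuing RPCs
# (the harness does this with waitforlisten).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420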
00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 [2024-07-23 03:34:03.657262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read 
completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Write completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 Read completed with error (sct=0, sc=8) 00:34:37.721 starting I/O failed 00:34:37.721 [2024-07-23 03:34:03.657572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:37.721 [2024-07-23 03:34:03.657770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.657803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.657956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.657983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.658139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.658165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.658368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.658396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.658596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.658644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.658798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.658824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 00:34:37.721 [2024-07-23 03:34:03.659031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.721 [2024-07-23 03:34:03.659073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.721 qpair failed and we were unable to recover it. 
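Once the target process has been killed with kill -9, every reconnect attempt from here on fails the same way: connect() returns errno 111 because nothing is listening on 10.0.0.2:4420 any more, so each qpair is torn down without recovering. For reference, errno 111 is ECONNREFUSED on Linux; a quick way to confirm on the test host:

# Map errno 111 to its symbolic name and message (expected output: 111 Connection refused).
python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'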
00:34:37.721 [2024-07-23 03:34:03.659407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.659464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.659664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.659692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.659846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.659872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.660021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.660046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.660230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.660255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.660543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.660568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.660738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.660764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.660915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.660940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.661149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.661178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.661374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.661416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 
00:34:37.722 [2024-07-23 03:34:03.661610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.661641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.661788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.661813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.661965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.661992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.662145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.662171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.662425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.662455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.662629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.662656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.662824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.662849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.663213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.663265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.663499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.663529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.663719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.663745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 
00:34:37.722 [2024-07-23 03:34:03.663916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.663942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.664079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.664121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.664383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.664426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.664642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.664669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.664814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.664839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.665043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.665072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.665267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.665293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.665463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.665507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.665718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.665744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.665932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.665961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 
00:34:37.722 [2024-07-23 03:34:03.666154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.666181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.666353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.666379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.666560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.666586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.666753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.666793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.666983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.667010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.667224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.667269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.667469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.667498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.722 [2024-07-23 03:34:03.667686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.722 [2024-07-23 03:34:03.667712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.722 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.667885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.667913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.668122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.668148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 
00:34:37.723 [2024-07-23 03:34:03.668349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.668381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.668586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.668632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.668808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.668834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.669024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.669049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.669239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.669265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.669428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.669453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.669599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.669633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.669832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.669857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.670011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.670037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.670221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.670263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 
00:34:37.723 [2024-07-23 03:34:03.670452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.670478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.670626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.670652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.670825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.670851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.671040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.671083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.671325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.671351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.671488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.671514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.671688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.671714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.671861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.671888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.672050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.672093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.672306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.672332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 
00:34:37.723 [2024-07-23 03:34:03.672474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.672500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.672679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.672705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.672852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.672877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.673046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.673223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.673383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.673556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.673777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.673987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.674014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.674192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.674217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 
00:34:37.723 [2024-07-23 03:34:03.674384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.674410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.674560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.674585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.674761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.674800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.674983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.675011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.675204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.723 [2024-07-23 03:34:03.675249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.723 qpair failed and we were unable to recover it. 00:34:37.723 [2024-07-23 03:34:03.675465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.675490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.675682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.675708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.675922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.675951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.676139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.676166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.676347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.676374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 
00:34:37.724 [2024-07-23 03:34:03.676542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.676572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.676724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.676753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.676918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.676958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.677108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.677140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.677309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.677335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.677498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.677524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.677684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.677710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.677858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.677901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.678123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.678149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.678318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.678343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 
00:34:37.724 [2024-07-23 03:34:03.678552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.678580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.678733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.678760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.678933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.678959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.679162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.679187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.679342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.679368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.679563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.679589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.679773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.679800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.679938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.679965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.680160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.680186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.680367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.680392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 
00:34:37.724 [2024-07-23 03:34:03.680558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.680584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.680741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.680767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.680919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.680947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.681142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.681171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.681360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.681386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.681559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.681585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.681750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.681788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.681966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.681993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.682131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.682157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.682329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.682354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 
00:34:37.724 [2024-07-23 03:34:03.682529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.682555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.682737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.682762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.724 [2024-07-23 03:34:03.682932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.724 [2024-07-23 03:34:03.682957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.724 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.683129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.683154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.683327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.683352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.683545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.683571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.683776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.683802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.683956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.683981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.684194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.684219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.684382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.684408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 
00:34:37.725 [2024-07-23 03:34:03.684603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.684635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.684818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.684845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.685011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.685037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.685231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.685256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.685404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.685429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.685625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.685651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.685824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.685849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.686046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.686075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.686272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.686297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.686448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.686473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 
00:34:37.725 [2024-07-23 03:34:03.686682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.686722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.686922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.686969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.687241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.687267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.687457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.687483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.687670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.687700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.687904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.687933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.688177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.688221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.688419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.688445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.688619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.688645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.688820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.688845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 
00:34:37.725 [2024-07-23 03:34:03.689053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.689078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.689224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.689250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.689445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.689470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.725 qpair failed and we were unable to recover it. 00:34:37.725 [2024-07-23 03:34:03.689648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.725 [2024-07-23 03:34:03.689675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.689817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.689844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.690046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.690072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.690236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.690262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.690426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.690457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.690673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.690717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.690885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.690929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 
00:34:37.726 [2024-07-23 03:34:03.691066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.691093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.691393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.691431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.691583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.691609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.691764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.691790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.691988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.692016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.692208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.692233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.692427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.692453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.692633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.692662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.692801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.692828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.693003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.693029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 
00:34:37.726 [2024-07-23 03:34:03.693260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.693286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.693462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.693489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.693632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.693658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.693832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.693858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.694054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.694249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.694428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.694630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.694796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.694994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 
00:34:37.726 [2024-07-23 03:34:03.695176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.695390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.695583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.695756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.695923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.695968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.696162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.696187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.696331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.696358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.696531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.696556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.696735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.696763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.726 [2024-07-23 03:34:03.696910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.696936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 
00:34:37.726 [2024-07-23 03:34:03.697075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.726 [2024-07-23 03:34:03.697100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.726 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.697245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.697271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.697411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.697438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.697591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.697626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.697796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.697822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.697984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.698180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.698401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.698574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.698771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 
00:34:37.727 [2024-07-23 03:34:03.698964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.698990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.699158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.699185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.699354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.699381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.699553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.699581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.699764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.699790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.699939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.699964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.700113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.700154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.700382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.700407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.700594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.700627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.700814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.700840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 
00:34:37.727 [2024-07-23 03:34:03.701009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.701034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.701205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.701234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.701410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.701436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.701627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.701653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.701820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.701845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.702098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.702123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.702290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.702315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.702486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.702511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.702684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.702710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.702844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.702869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 
00:34:37.727 [2024-07-23 03:34:03.703016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.703041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.703234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.703259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.703444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.703470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.703638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.703664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.703834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.703859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.704034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.704060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.704202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.704229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.727 [2024-07-23 03:34:03.704393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.727 [2024-07-23 03:34:03.704421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.727 qpair failed and we were unable to recover it. 00:34:37.728 [2024-07-23 03:34:03.704657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.728 [2024-07-23 03:34:03.704683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.728 qpair failed and we were unable to recover it. 00:34:37.728 [2024-07-23 03:34:03.704824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.728 [2024-07-23 03:34:03.704849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.728 qpair failed and we were unable to recover it. 
00:34:37.728 [2024-07-23 03:34:03.705014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.728 [2024-07-23 03:34:03.705039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:37.728 qpair failed and we were unable to recover it.
[... the same three-record pattern repeats back-to-back from 03:34:03.705 through 03:34:03.747 (console timestamps 00:34:37.728 to 00:34:37.733): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x2179840 with addr=10.0.0.2, port=4420, and the qpair is reported as failed and unrecoverable ...]
00:34:37.733 [2024-07-23 03:34:03.747729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.733 [2024-07-23 03:34:03.747755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:37.733 qpair failed and we were unable to recover it.
00:34:37.733 [2024-07-23 03:34:03.747903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.747929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.748113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.748142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.748361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.748389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.748572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.748597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.748775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.748801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.749016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.749044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.749234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.749259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.749455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.749483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.733 [2024-07-23 03:34:03.749650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.733 [2024-07-23 03:34:03.749679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.733 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.749874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.749899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 
00:34:37.734 [2024-07-23 03:34:03.750084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.750112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.750320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.750348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.750569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.750594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.750745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.750770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.750958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.750986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.751171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.751200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.751360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.751388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.751548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.751575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.751776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.751802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.751993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.752018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 
00:34:37.734 [2024-07-23 03:34:03.752190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.752218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.752402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.752430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.752581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.752609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.752813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.752838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.753006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.753031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.753200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.753228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.753379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.753408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.753626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.753653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.753809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.753836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.754024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.754052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 
00:34:37.734 [2024-07-23 03:34:03.754245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.754270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.754449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.754477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.754665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.754694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.754883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.754909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.755096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.755124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.755309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.755337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.755526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.755551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.755767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.755795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.755951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.755979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.756140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.756165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 
00:34:37.734 [2024-07-23 03:34:03.756347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.734 [2024-07-23 03:34:03.756375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.734 qpair failed and we were unable to recover it. 00:34:37.734 [2024-07-23 03:34:03.756594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.756628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.756823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.756852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.756991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.757035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.757222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.757250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.757431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.757456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.757646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.757675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.757824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.757852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.758071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.758096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.758277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.758302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-23 03:34:03.758514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.758541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.758702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.758728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.758876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.758919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.759098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.759123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.759268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.759293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.759438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.759464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.759638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.759664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.759817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.759843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.760009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.760037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.760195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.760223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-23 03:34:03.760411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.760436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.760585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.760610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.760797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.760823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.761027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.761053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.761216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.761243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.761433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.761461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.761660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.761686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.761850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.761875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.762071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.762100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.762268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.762293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 
00:34:37.735 [2024-07-23 03:34:03.762436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.762461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.762658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.762686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.762854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.762879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.763046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.763071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.763262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.763290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.763460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.763485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.763675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.763703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.763903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.735 [2024-07-23 03:34:03.763928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.735 qpair failed and we were unable to recover it. 00:34:37.735 [2024-07-23 03:34:03.764097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.764122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.764298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.764323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-23 03:34:03.764488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.764517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.764676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.764702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.764916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.764944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.765105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.765136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.765330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.765355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.765539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.765566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.765757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.765782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.765974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.765999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.766182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.766207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.766394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.766422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-23 03:34:03.766639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.766665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.766832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.766860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.767010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.767038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.767227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.767252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.767424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.767449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.767666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.767694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.767860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.767885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.768077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.768106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.768333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.768359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.768555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.768581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-23 03:34:03.768747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.768773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.768993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.769188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.769379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.769541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.769739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.769958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.769987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.770202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.770230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.770484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.770537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.770730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.770756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 
00:34:37.736 [2024-07-23 03:34:03.770898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.770927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.771100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.771125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.771311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.771339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.771573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.771601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.771821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.736 [2024-07-23 03:34:03.771846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.736 qpair failed and we were unable to recover it. 00:34:37.736 [2024-07-23 03:34:03.772011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.772038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.772255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.772281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.772472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.772499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.772699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.772725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.772896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.772921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-23 03:34:03.773086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.773110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.773296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.773324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.773508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.773535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.773754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.773780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.773940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.773968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.774130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.774157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.774336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.774364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.774517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.774547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.774737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.774762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.774900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.774925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-23 03:34:03.775141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.775169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.775381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.775408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.775569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.775594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.775749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.775775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.775950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.775975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.776146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.776171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.776335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.776360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.776550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.776582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.776753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.776778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.776945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.776987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-23 03:34:03.777159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.777184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.777322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.777347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.777533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.777560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.777746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.777774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.777961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.777987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.778159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.778184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.778372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.778401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.778594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.778625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.778824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.778853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.779006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.779035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-23 03:34:03.779188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.779213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.779370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.779395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.779564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-23 03:34:03.779589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-23 03:34:03.779758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.779784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.779929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.779954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.780122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.780147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.780343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.780367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.780537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.780565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.780721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.780750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.780967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.780992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-23 03:34:03.781206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.781234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.781413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.781438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.781606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.781637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.781827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.781855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.782017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.782045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.782221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.782246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.782460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.782488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.782696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.782721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.782882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.782907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.783089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.783117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-23 03:34:03.783265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.783293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.783479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.783504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.783689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.783717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.783871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.783898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.784059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.784084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.784268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.784296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.784447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.784475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.784703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.784728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.784924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.784952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.785141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.785166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-23 03:34:03.785335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.785360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.785545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.785574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.785768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.785797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.785993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.786019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.786235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.786263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.786452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.786479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.786682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.786708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.786851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.786876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.787094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.787122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-23 03:34:03.787308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.787335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-23 03:34:03.787511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-23 03:34:03.787535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.787738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.787764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.787962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.787990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.788170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.788198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.788411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.788439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.788608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.788644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.788857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.788885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.789073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.789101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.789255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.789285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.789475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.789500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-23 03:34:03.789718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.789747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.789902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.789930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.790144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.790169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.790363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.790388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.790552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.790577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.790770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.790803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.790989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.791014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.791209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.791233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.791422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.791449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.791636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.791666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-23 03:34:03.791826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.791854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.792017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.792042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.792220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.792248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.792461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.792486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.792678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.792706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.792898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.792923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.793116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.793141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.793359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.793384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.793569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.793598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.793771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.793797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-23 03:34:03.794009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.794037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-23 03:34:03.794218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-23 03:34:03.794246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.794436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.794464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.794708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.794734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.794880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.794905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.795076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.795101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.795271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.795298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.795513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.795538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.795703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.795729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.795916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.795944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-23 03:34:03.796108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.796135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.796323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.796348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.796560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.796592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.796788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.796813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.797005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.797032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.797221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.797245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.797432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.797459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.797641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.797669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.797880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.797908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.798097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.798122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-23 03:34:03.798306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.798334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.798544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.798572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.798771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.798799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.799021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.799046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.799210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.799238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.799396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.799425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.799623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.799652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.799845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.799870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.800072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.800100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.800285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.800313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-23 03:34:03.800465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.800492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.800674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.800699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.800856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.800884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.801059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.801086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.801244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.801272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.801432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.801457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.801643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.801671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.801869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.801894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.802038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.802063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-23 03:34:03.802228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-23 03:34:03.802257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 
00:34:37.741 [2024-07-23 03:34:03.802446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.802473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.802679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.802706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.802912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.802940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.803132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.803157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.803343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.803371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.803529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.803557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.803758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.803784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.803959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.803985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.804150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.804177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.804380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.804405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 
00:34:37.741 [2024-07-23 03:34:03.804565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.804590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.804740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.804766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.804950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.804980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.805108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187390 is same with the state(5) to be set 00:34:37.741 [2024-07-23 03:34:03.805344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.805387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.805589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.805625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.805774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.805800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.805976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.806001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.806167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.806193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.806463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.806515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 
00:34:37.741 [2024-07-23 03:34:03.806700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.806731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.806929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.806956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.807230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.807282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.807474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.807502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.807667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.807694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.807904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.807940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.808217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.808270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.808497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.808523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.808693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.808720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.808916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.808944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 
00:34:37.741 [2024-07-23 03:34:03.809122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.809149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.809302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.809327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.809515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.809545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.809742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.809768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.810000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.810049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.810335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.810385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.810574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.810599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-23 03:34:03.810790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-23 03:34:03.810829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.811079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.811117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.811296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.811323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 
00:34:37.742 [2024-07-23 03:34:03.811544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.811619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.811850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.811876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.812022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.812047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.812215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.812240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.812536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.812589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.812760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.812786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.812971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.812998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.813333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.813380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.813552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.813577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.813727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.813753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 
00:34:37.742 [2024-07-23 03:34:03.813972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.814000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.814164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.814189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.814457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.814508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.814663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.814692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.814892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.814917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.815060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.815085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.815237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.815262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.815457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.815483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.815653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.815694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.815870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.815895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 
00:34:37.742 [2024-07-23 03:34:03.816126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.816151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.816365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.816419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.816579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.816627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.816841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.816866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.817009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.817034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.817198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.817223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.817413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.817438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.817623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.817656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.817815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.817842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.818010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.818035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 
00:34:37.742 [2024-07-23 03:34:03.818224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.818252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.818446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.818473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.818672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.818698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.818872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.818897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.819176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.742 [2024-07-23 03:34:03.819229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.742 qpair failed and we were unable to recover it. 00:34:37.742 [2024-07-23 03:34:03.819425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.819452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.819606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.819654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.819823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.819848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.820002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.820027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.820189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.820214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 
00:34:37.743 [2024-07-23 03:34:03.820542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.820595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.820829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.820854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.821052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.821082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.821400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.821453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.821646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.821672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.821841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.821867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.822068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.822096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.822255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.822280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.822519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.822569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.822741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.822767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 
00:34:37.743 [2024-07-23 03:34:03.822960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.822985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.823267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.823317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.823500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.823528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.823728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.823753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.823911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.823939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.824165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.824191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.824363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.824388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.824549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.824577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.824745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.824771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.824907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.824932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 
00:34:37.743 [2024-07-23 03:34:03.825115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.825143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.825400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.825425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.825597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.825627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.825801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.825826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.826056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.826081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.826269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.826294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.826533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.826561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.826755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.826781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.826970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.827009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.827305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.827357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 
00:34:37.743 [2024-07-23 03:34:03.827672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.827699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.827872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.827897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.743 [2024-07-23 03:34:03.828069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.743 [2024-07-23 03:34:03.828094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.743 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.828288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.828332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.828565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.828622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.828794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.828819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.828989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.829015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.829284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.829332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.829677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.829703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.829847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.829872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 
00:34:37.744 [2024-07-23 03:34:03.830068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.830115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.830317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.830364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.830538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.830565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.830760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.830804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.830990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.831017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.831232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.831275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.831450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.831476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.831660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.831691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.831864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.831908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.832096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.832139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 
00:34:37.744 [2024-07-23 03:34:03.832283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.832310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.832517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.832542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.832744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.832776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.832962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.832991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.833174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.833203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.833390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.833418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.833606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.833639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.833847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.833875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.834126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.834176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.834341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.834369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 
00:34:37.744 [2024-07-23 03:34:03.834530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.834558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.834745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.834771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.834991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.835019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.835214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.835242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.835404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.744 [2024-07-23 03:34:03.835432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.744 qpair failed and we were unable to recover it. 00:34:37.744 [2024-07-23 03:34:03.835607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.835638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.835790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.835815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.835978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.836006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.836189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.836221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.836407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.836435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 
00:34:37.745 [2024-07-23 03:34:03.836638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.836694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.836868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.836911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.837136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.837179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.837473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.837526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.837724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.837750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.837949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.837993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.838156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.838198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.838427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.838471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.838643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.838669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.838835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.838877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 
00:34:37.745 [2024-07-23 03:34:03.839046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.839090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.839275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.839316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.839467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.839494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.839656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.839686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.839888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.839932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.840093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.840135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.840308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.840335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.840514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.840539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.840702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.840744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.840904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.840947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 
00:34:37.745 [2024-07-23 03:34:03.841177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.841221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.841393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.841418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.841562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.841587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.841793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.841835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.842027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.842056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.842295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.842338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.842482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.842508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.842728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.842772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.842951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.842995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.843189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.843218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 
00:34:37.745 [2024-07-23 03:34:03.843433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.843459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.843604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.843638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.745 [2024-07-23 03:34:03.843842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.745 [2024-07-23 03:34:03.843889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.745 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.844129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.844160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.844314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.844343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.844504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.844533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.844744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.844773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.844939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.844967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.845127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.845155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.845392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.845436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 
00:34:37.746 [2024-07-23 03:34:03.845607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.845638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.845827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.845871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.846138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.846188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.846393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.846436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.846635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.846661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.846854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.846900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.847091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.847134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.847298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.847342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.847519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.847546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.847725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.847756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 
00:34:37.746 [2024-07-23 03:34:03.847972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.848001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.848299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.848353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.848551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.848577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.848751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.848779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.848969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.848997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.849162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.849190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.849398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.849426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.849606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.849638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.849787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.849813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.850045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.850073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 
00:34:37.746 [2024-07-23 03:34:03.850285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.850338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.850555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.850582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.850786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.850812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.850981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.851009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.851180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.851220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.851401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.851433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.851628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.746 [2024-07-23 03:34:03.851654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.746 qpair failed and we were unable to recover it. 00:34:37.746 [2024-07-23 03:34:03.851792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.851817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.852001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.852030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.852347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.852398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-23 03:34:03.852598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.852635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.852775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.852800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.852989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.853017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.853181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.853210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.853392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.853420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.853633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.853659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.853855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.853880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.854128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.854155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.854341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.854369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.854573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.854599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-23 03:34:03.854766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.854791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.854983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.855008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.855194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.855222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.855433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.855461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.855675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.855701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.855916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.855945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.856134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.856162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.856371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.856427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.856611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.856658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.856841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.856866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-23 03:34:03.857052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.857078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.857270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.857299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.857516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.857544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.857742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.857767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.857954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.857982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.858189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.858217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.858404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.858462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.858697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.858723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.858909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.858938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.859138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.859163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-23 03:34:03.859355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.859384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.859571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.859601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.859827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-23 03:34:03.859854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-23 03:34:03.860052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.860080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.860266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.860294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.860480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.860508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.860702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.860733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.860923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.860952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.861140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.861165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.861363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.861390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-23 03:34:03.861569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.861597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.861771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.861796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.861937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.861962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.862148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.862177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.862365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.862394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.862554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.862582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.862774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.862800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.862968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.862993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.863189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.863217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.863429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.863457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-23 03:34:03.863621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.863646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.863840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.863865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.864077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.864105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.864281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.864309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.864519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.864547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.864714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.864739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.864904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.864929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.865123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.865151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.865337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.865365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.865567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.865595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-23 03:34:03.865769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.865795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.866002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.866031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.866244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-23 03:34:03.866269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-23 03:34:03.866428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.866463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.866654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.866684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.866850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.866875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.867015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.867057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.867254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.867280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.867449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.867474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.867659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.867688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-23 03:34:03.867899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.867928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.868113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.868138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.868282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.868308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.868455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.868480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.868670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.868696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.868862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.868887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.869076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.869104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.869298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.869324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.869485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.869513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.869708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.869733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-23 03:34:03.869909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.869933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.870095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.870123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-23 03:34:03.870304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-23 03:34:03.870332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.870518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.870543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.870687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.870729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.870942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.870970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.871130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.871157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.871322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.871364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.871572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.871600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.871880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.871907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 
00:34:37.750 [2024-07-23 03:34:03.872055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.872084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.872268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.872296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.872489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.872514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.872703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.872732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.872914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.872942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.873103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.873129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.873326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.873354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.873561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.873589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.873794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.873819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.874007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.874036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 
00:34:37.750 [2024-07-23 03:34:03.874214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.874242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.874426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.874451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.874633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.874676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.874846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.874871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.875084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.875109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.875266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.875293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.875502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.875530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.875702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.875728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.875925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-23 03:34:03.875951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-23 03:34:03.876196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.876222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 
00:34:37.751 [2024-07-23 03:34:03.876389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.876414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.876633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.876661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.876843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.876872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.877055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.877080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.877270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.877300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.877471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.877496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.877668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.877695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.877918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.877946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.878106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.878135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.878305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.878330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 
00:34:37.751 [2024-07-23 03:34:03.878478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.878503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.878685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.878726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.878883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.878908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.879078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.879122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.879334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.879362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.879576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.879601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.879829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.879857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.880016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.880044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.880211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.880236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.880420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.880445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 
00:34:37.751 [2024-07-23 03:34:03.880632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.880659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.880843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.880882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.881063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.881090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.881257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.881300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.881504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.881530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.881683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.881711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.881882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.881910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.882056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.882083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.882276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.882319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.882470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.882496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 
00:34:37.751 [2024-07-23 03:34:03.882687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.882733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.882924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.882967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.883128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.883171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.883431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.883481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.751 [2024-07-23 03:34:03.883627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.751 [2024-07-23 03:34:03.883660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.751 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.883837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.883880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.884024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.884051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.884195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.884222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.884418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.884444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.884636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.884664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 
00:34:37.752 [2024-07-23 03:34:03.884831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.884860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.885071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.885100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.885360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.885412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.885574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.885599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.885800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.885825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.885962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.885988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.886180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.886208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.886367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.886395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.886593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.886624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.886799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.886824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 
00:34:37.752 [2024-07-23 03:34:03.887021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.887047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.887290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.887338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.887558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.887586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.887753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.887778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.887977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.888002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.888227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.888255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.888438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.888466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.888656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.888698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.888842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.888868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.889069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.889097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 
00:34:37.752 [2024-07-23 03:34:03.889314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.889342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.889527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.889555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.889731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.889757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.889919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.889944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.890130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.890157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.890378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.890405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.890589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.890621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.890770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.890796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.890935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.890960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.891226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.891276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 
00:34:37.752 [2024-07-23 03:34:03.891490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.891518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.891718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.891744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.752 qpair failed and we were unable to recover it. 00:34:37.752 [2024-07-23 03:34:03.891938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.752 [2024-07-23 03:34:03.891966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.892179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.892207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.892383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.892411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.892584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.892628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.892786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.892814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.893015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.893059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.893280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.893323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.893519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.893548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 
00:34:37.753 [2024-07-23 03:34:03.893706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.893733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.893953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.893996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.894195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.894223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.894439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.894481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.894681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.894708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.894929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.894973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.895166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.895209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.895412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.895439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.895611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.895646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.895816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.895842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 
00:34:37.753 [2024-07-23 03:34:03.896041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.896084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.896276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.896318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.896464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.896489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.896704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.896747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.896934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.896962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.897115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.897143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.897434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.897494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.897700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.897726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.753 qpair failed and we were unable to recover it. 00:34:37.753 [2024-07-23 03:34:03.897871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.753 [2024-07-23 03:34:03.897896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.898096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.898124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 
00:34:37.754 [2024-07-23 03:34:03.898302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.898330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.898519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.898548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.898732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.898758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.898903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.898929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.899110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.899139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.899412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.899463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.899667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.899693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.899859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.899885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.900208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.900271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.900456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.900485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 
00:34:37.754 [2024-07-23 03:34:03.900687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.900712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.900858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.900883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.901077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.901105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.901292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.901320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.901512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.901540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.901726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.901756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.901949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.901977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.902161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.902189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.902375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.902403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.902565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.902594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 
00:34:37.754 [2024-07-23 03:34:03.902786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.902812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.902976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.903004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.903187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.903215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.903472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.903497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.754 qpair failed and we were unable to recover it. 00:34:37.754 [2024-07-23 03:34:03.903642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.754 [2024-07-23 03:34:03.903668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.903834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.903859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.904105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.904152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.904338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.904366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.904555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.904583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.904807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.904846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 
00:34:37.755 [2024-07-23 03:34:03.905017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.905061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.905288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.905331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.905498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.905523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.905701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.905728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.905916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.905960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.906296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.906353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.906508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.906535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.906727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.906771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.906946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.906994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.907215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.907258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 
00:34:37.755 [2024-07-23 03:34:03.907402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.907428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.907604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.907636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.907833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.907863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.755 [2024-07-23 03:34:03.908029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.755 [2024-07-23 03:34:03.908072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.755 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.908333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.908385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.908534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.908559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.908724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.908750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.908911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.908954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.909132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.909179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.909408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.909450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 
00:34:37.756 [2024-07-23 03:34:03.909644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.909671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.909844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.909887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.910106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.910150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.910399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.910448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.910633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.910659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.910899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.910943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.911121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.911164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.911337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.911381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.911517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.911542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.911695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.911723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 
00:34:37.756 [2024-07-23 03:34:03.911899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.911942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.912131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.912176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.912394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.912420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.912599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.912631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.912821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.912867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.913064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.913108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.913281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.913326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.913493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.913519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.913698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.913742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-23 03:34:03.913917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-23 03:34:03.913965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-23 03:34:03.914149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.914192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.914394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.914420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.914593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.914625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.914809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.914836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.915031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.915074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.915268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.915312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.915480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.915505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.915714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.915757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.915942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.915972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.916154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.916182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-23 03:34:03.916397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.916425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.916575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.916604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.916772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.916802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.917003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.917031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.917224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.917252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.917403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.917432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.917643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.917670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.917837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.917862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.918082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.918110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.918322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.918350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-23 03:34:03.918546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.918575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.918772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.918798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.918992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.919017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.919245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.919283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.919441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.919470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-23 03:34:03.919638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-23 03:34:03.919664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.919832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.919857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.920021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.920050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.920256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.920313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.920497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.920525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-23 03:34:03.920694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.920720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.920891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.920916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.921080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.921108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.921269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.921297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.921486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.921514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.921718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.921744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.921929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.921957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.922184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.922243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.922431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.922459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.922623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.922666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-23 03:34:03.922819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.922844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.923035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.923065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.923255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.923283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.923489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.923517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.923691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.923717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.923903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.923931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.924101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.924126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.924308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.924336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.924526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.924554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-23 03:34:03.924752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-23 03:34:03.924778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-23 03:34:03.924941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.758 [2024-07-23 03:34:03.924969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:37.758 qpair failed and we were unable to recover it.
[... the three-line error record above repeats continuously, once per retried connection attempt, from 03:34:03.924941 through 03:34:03.968682; every attempt to reach tqpair=0x2179840 at 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair is not recovered ...]
00:34:37.765 [2024-07-23 03:34:03.968656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.765 [2024-07-23 03:34:03.968682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:37.765 qpair failed and we were unable to recover it.
00:34:37.765 [2024-07-23 03:34:03.968827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.968852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.968988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.969013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.969178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.969203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.969390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.969418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.969601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.969636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.969827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.969853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.970009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.970036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.970252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.765 [2024-07-23 03:34:03.970280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.765 qpair failed and we were unable to recover it. 00:34:37.765 [2024-07-23 03:34:03.970470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.970498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.970667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.970693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-23 03:34:03.970865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.970911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.971073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.971098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.971289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.971317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.971470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.971498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.971688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.971714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.971879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.971903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.972108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.972132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.972272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.972297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.972496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.972524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.972704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.972729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-23 03:34:03.972901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.972926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.973139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.973179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.973348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.973377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.973578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.973606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.973788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.973814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.974040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.974067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.974418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.974467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.974673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.974698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.974851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.974876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.975045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.975070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-23 03:34:03.975236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.975261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.975432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.975457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.975604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.975634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.975780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.975822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.976036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.976219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.976397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.976637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.976840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.976995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.977020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-23 03:34:03.977168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.977209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.977384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.977409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.977625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.977666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.977842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-23 03:34:03.977867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-23 03:34:03.978059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.978085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.978245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.978272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.978467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.978509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.978666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.978690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.978837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.978862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.978999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.979024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-23 03:34:03.979192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.979218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.979363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.979388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.979534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.979577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.979793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.979819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.980013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.980041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.980205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.980232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.980423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.980463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.980674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.980699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.980869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.980911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.981066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.981091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-23 03:34:03.981273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.981301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.981465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.981492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.981654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.981680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.981855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.981897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.982109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.982140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.982357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.982385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.982576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.982602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.982820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.982847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.983021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.983046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.983209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.983236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-23 03:34:03.983402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.983430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.983639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.983665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.983812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.983837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.984040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.984068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.984298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.984323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.984493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.984518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.984681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.984707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.984882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.984907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.985063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.985087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.985245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.985270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-23 03:34:03.985409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.985434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.985572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.985597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.985808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.985833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.985989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.986014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-23 03:34:03.986200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-23 03:34:03.986228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.986407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.986435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.986623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.986648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.986796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.986821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.986991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.987020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.987210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.987235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-23 03:34:03.987386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.987410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.987625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.987669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.987876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.987902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.988090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.988118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.988302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.988327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.988497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.988522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.988679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.988704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.988856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.988898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.989108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.989133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.989314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.989339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-23 03:34:03.989529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.989558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.989738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.989764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.989909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.989933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.990128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.990170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.990340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.990365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.990550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.990575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.990752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.990777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.990918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.990942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.991087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.991113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.991318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.991346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-23 03:34:03.991525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.991552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.991732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.991758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.991940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.991983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.992149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.992174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.992351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.992375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.992603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.992637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.992800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.992824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.992972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.992998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.993202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.993242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.993413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.993437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-23 03:34:03.993611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.993658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.993831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.993856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.994039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.994064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.994217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.994242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.994416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.994443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-23 03:34:03.994618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-23 03:34:03.994643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.994817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.994842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.995014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.995039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.995204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.995229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.995405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.995432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-23 03:34:03.995591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.995626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.995795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.995822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.996005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.996034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.996200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.996225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.996436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.996460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.996676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.996704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.996867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.996909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.997080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.997105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.997253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.997279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.997432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.997456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-23 03:34:03.997655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.997690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.997859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.997885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.998053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.998081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.998299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.998324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.998493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.998521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.998730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.998757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.998904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.998929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.999119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.999147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.999359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.999387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.999560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.999585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-23 03:34:03.999754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.999779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:03.999927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:03.999953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.000152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.000177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.000370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.000398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.000586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.000619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.000800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.000825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.001018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.001043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.001221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.001247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.001416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.001442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.001646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.001676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-23 03:34:04.001824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.001848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.002030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.002055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.002196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.002242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.002424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.002453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.002652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.002677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.002827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.002854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.003056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.003084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.003282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.003308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-23 03:34:04.003488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-23 03:34:04.003514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.003696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.003724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 
00:34:37.770 [2024-07-23 03:34:04.003947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.003971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.004110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.004135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.004304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.004329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.004529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.004557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.004812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.004838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.005026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.005055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.005281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.005306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.005474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.005499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.005705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.005734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.005945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.005970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 
00:34:37.770 [2024-07-23 03:34:04.006118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.006143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.006316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.006341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.006509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.006533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.006686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.006711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.006902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.006930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.007125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.007151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.007317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.007346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.007551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.007576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.007754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.007780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.007988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.008016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 
00:34:37.770 [2024-07-23 03:34:04.008166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.008196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.008414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.008440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.008590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.008619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.008846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.008873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.009040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.009065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.009235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.009260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.009446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.009475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.009657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.009682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.009846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.009874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.010090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.010119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 
00:34:37.770 [2024-07-23 03:34:04.010317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.010342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.010524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.010549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.010743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.010772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.010959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.010985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.011167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.011195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.011383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.011411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.011620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.011648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.011846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-23 03:34:04.011871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-23 03:34:04.012045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.012070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.012244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.012270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 
00:34:37.771 [2024-07-23 03:34:04.012410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.012435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.012605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.012646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.012885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.012910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.013151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.013176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.013342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.013367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.013539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.013564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.013739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.013765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.013926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.013954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.014120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.014145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.014315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.014341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 
00:34:37.771 [2024-07-23 03:34:04.014516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.014541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.014765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.014790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.014979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.015191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.015389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.015576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.015764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.015959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.015988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.016179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.016206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.016390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.016419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 
00:34:37.771 [2024-07-23 03:34:04.016588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.016630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.016853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.016881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.017071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.017096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.017265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.017289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.017462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.017487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.017632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.017658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.017801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.017826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.018036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.018063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.018222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.018250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.018441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.018467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 
00:34:37.771 [2024-07-23 03:34:04.018632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.018658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.018868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.018895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.019058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.019083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.019278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.019306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.019468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.771 [2024-07-23 03:34:04.019496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.771 qpair failed and we were unable to recover it. 00:34:37.771 [2024-07-23 03:34:04.019682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.019708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.019855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.019881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.020049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.020077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.020260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.020285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.020468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.020496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 
00:34:37.772 [2024-07-23 03:34:04.020679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.020707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.020889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.020915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.021056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.021081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.021253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.021278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.021427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.021456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.021677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.021706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.021863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.021891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.022078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.022103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.022250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.022276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.022442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.022468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 
00:34:37.772 [2024-07-23 03:34:04.022662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.022688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.022850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.022875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.023044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.023073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.023284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.023309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.023501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.023531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.023716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.023746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.023917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.023942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.024111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.024136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.024305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.024331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.024505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.024530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 
00:34:37.772 [2024-07-23 03:34:04.024693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.024721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.024906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.024935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.025128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.025153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.025325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.025350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.025536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.025563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.025755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.772 [2024-07-23 03:34:04.025781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.772 qpair failed and we were unable to recover it. 00:34:37.772 [2024-07-23 03:34:04.025941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.025966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.026151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.026179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.026367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.026392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.026595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.026628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 
00:34:37.773 [2024-07-23 03:34:04.026816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.026844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.027029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.027058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.027232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.027258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.027395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.027421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.027606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.027639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.027828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.027854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.028062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.028087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.028232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.028257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.028447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.028475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.028696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.028722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 
00:34:37.773 [2024-07-23 03:34:04.028910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.028936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.029148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.029177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.029385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.029412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.029641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.029667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.029858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.029886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.030073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.030103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.030284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.030310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.030450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.030475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.030644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.030672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.030841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.030866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 
00:34:37.773 [2024-07-23 03:34:04.031034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.031059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.031272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.031297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.031465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.031490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.031676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.031705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.031905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.031933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.032112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.032138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.032288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.032312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.032485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.032510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.032675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.032700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.032893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.032921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 
00:34:37.773 [2024-07-23 03:34:04.033110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.033138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.033298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.033324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.033492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.773 [2024-07-23 03:34:04.033516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.773 qpair failed and we were unable to recover it. 00:34:37.773 [2024-07-23 03:34:04.033716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.033741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.033904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.033929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.034120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.034147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.034316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.034341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.034548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.034574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.034747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.034771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.034918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.034943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 
00:34:37.774 [2024-07-23 03:34:04.035110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.035136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.035287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.035312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.035484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.035508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.035667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.035694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.035889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.035917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.036105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.036133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.036343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.036368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.036558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.036585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.036788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.036816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.036983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 
00:34:37.774 [2024-07-23 03:34:04.037181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.037373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.037542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.037733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.037912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.037937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.038072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.038097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.038291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.038319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.038498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.038523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.038701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.038726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.038915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.038943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 
00:34:37.774 [2024-07-23 03:34:04.039120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.039147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.039340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.039365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.039548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.039576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.039813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.039839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.040033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.040201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.040414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.040656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.040820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 00:34:37.774 [2024-07-23 03:34:04.040985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.774 [2024-07-23 03:34:04.041013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.774 qpair failed and we were unable to recover it. 
00:34:37.774 [2024-07-23 03:34:04.041161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.041188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.041360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.041385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.041555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.041580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.041840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.041866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.042059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.042087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.042297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.042324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.042515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.042542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.042741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.042766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.042912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.042937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.043129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.043155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 
00:34:37.775 [2024-07-23 03:34:04.043320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.043349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.043542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.043568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.043737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.043765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.043986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.044013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.044198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.044225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.044433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.044458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.044630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.044656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.044831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.044857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.045027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.045054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.045220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.045249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 
00:34:37.775 [2024-07-23 03:34:04.045449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.045474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.045643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.045669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.045844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.045870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.046015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.046039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.046211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.046236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.046424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.046452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.046641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.046675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.046850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.046876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.047059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.047087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.775 [2024-07-23 03:34:04.047288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.047312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 
00:34:37.775 [2024-07-23 03:34:04.047519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-23 03:34:04.047544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.047716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.047742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.047933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.047961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.048153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.048180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.048322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.048364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.048575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.048602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.048789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.048814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.048975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.049163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.049323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-23 03:34:04.049496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.049729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.049902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.049928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.050077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.050119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.050296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.050321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.050484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.050510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.050697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.050727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.050946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.050970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.051163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.051188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.051333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.051375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-23 03:34:04.051592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.051626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.051836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.051861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.052059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.052084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.052287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.052312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.052466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.052491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.052718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.052747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.052908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.052935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.053123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.053148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.053343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.053368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.053536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.053561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-23 03:34:04.053752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.053777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.053956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.053980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.054129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.054154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.054297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.054322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.054504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.054531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.054700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.054725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.054866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.054891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-23 03:34:04.055078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-23 03:34:04.055106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.055323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.055350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.055515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.055539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-23 03:34:04.055708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.055734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.055926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.055953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.056136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.056161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.056322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.056350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.056541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.056566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.056737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.056764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.056934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.056964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.057182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.057208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.057375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.057399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.057562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.057589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-23 03:34:04.057811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.057839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.058044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.058070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.058261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.058289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.058444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.058472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.058664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.058690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.058910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.058937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.059124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.059151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.059322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.059348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.059523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.059548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.059725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.059753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-23 03:34:04.059937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.059962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.060120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.060148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.060364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.060389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.060578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.060607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.060823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.060852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.061073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.061101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.061267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.061292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.061508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.061536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.061749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.061775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.061948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.061972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-23 03:34:04.062138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.062165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.062344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.062371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.062566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.062592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.062771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.062796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.062965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-23 03:34:04.062989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-23 03:34:04.063136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.063161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.063328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.063353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.063545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.063573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.063773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.063799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.063944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.063969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-23 03:34:04.064136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.064161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.064310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.064335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.064523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.064551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.064713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.064743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.064941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.064967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.065128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.065156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.065343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.065371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.065585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.065610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.065809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.065837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.066059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.066083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-23 03:34:04.066229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.066254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.066445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.066480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.066667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.066695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.066882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.066908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.067065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.067093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.067304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.067332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.067530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.067555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.067743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.067771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.067958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.067986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.068170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.068197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-23 03:34:04.068381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.068408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.068608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.068650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.068795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.068820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.069016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.069041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.069238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.069266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.069526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.069554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.069721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.069748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.069920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.069945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.070115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.070140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.070299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.070327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-23 03:34:04.070514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.070542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.070759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.070785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.070956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-23 03:34:04.070981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-23 03:34:04.071167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.071194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.071390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.071415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.071610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.071640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.071839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.071867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.072037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.072062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.072201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.072247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.072442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.072467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-23 03:34:04.072664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.072690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.072830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.072856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.073015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.073040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.073205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.073229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.073399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.073424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.073594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.073632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.073829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.073854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.074029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.074055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.074226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.074250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.074417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.074446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-23 03:34:04.074632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.074675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.074842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.074868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.075025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.075050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.075266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.075293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.075476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.075503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.075670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.075695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.075867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.075892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.076038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.076063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.076232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.076256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.076399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.076424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-23 03:34:04.076609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.076664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.076883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.076908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.077072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.077100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.077301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.077326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.077493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.077517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.077687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.077713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.077891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.077916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.078107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.078131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.078317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.078344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-23 03:34:04.078503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-23 03:34:04.078530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-23 03:34:04.078723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.078750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.078909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.078936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.079099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.079128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.079343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.079368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.079563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.079591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.079782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.079811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.079978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.080175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.080346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.080564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-23 03:34:04.080780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.080974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.080999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.081192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.081217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.081424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.081448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.081595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.081645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.081862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.081888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.082082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.082110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.082297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.082325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.082487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.082513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.082659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.082685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-23 03:34:04.082895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.082922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.083110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.083135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.083292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.083317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.083473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.083499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.083641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.083666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.083829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.083857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.084073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.084099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.084268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.084293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.084486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.084513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.084720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.084749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-23 03:34:04.084958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.084983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.085155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-23 03:34:04.085179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-23 03:34:04.085317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.085342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.085516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.085541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.085767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.085796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.086014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.086039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.086209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.086237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.086448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.086475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.086698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.086733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.086871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.086895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-23 03:34:04.087037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.087079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.087291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.087318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.087504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.087528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.087683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.087712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.087876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.087903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.088091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.088116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.088303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.088332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.088485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.088513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.088738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.088764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.088954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.088982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-23 03:34:04.089170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.089198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.089417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.089442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.089592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.089623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.089790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.089815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.089976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.090001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.090186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.090214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.090405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.090433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.090624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.090649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.090790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.090831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.091045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.091073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-23 03:34:04.091268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.091293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.091481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.091509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.091729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.091759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.091960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.091989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.092233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.092430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.092457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.092632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.092658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.092877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.092906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.093076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.093102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-23 03:34:04.093239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-23 03:34:04.093264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-23 03:34:04.093434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.093459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.093625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.093651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.093792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.093817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.094005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.094032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.094252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.094280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.094450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.094475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.094638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.094664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.094854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.094882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.095071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.095096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.095267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.095291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-23 03:34:04.095433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.095476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.095708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.095734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.095928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.095956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.096175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.096203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.096420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.096445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.096598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.096631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.096819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.096844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.097033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.097058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.097284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.097312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.097488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.097516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-23 03:34:04.097708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.097734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.097929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.097957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.098144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.098173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.098342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.098368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.098538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.098562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.098748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.098777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.098973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.099000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.099174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.099198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.099422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.099450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.099623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.099648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-23 03:34:04.099822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.099847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.100045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.100207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.100418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.100625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.100828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.100993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.101018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.101188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-23 03:34:04.101212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-23 03:34:04.101351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.101377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.101518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.101561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-23 03:34:04.101778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.101803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.101943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.101968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.102179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.102207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.102414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.102441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.102600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.102633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.102820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.102847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.103043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.103070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.103261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.103286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.103429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.103455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.103596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.103627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-23 03:34:04.103812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.103837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.103987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.104027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.104223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.104248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.104416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.104441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.104637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.104663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.104898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.104924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.105094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.105119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.105327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.105352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.105544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.105572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.105744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.105769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-23 03:34:04.105983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.106010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.106197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.106226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.106369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.106395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.106585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.106620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.106785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.106812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.107029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.107053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.107255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.107280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.107430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.107456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.107650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.107693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.107866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.107906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-23 03:34:04.108064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.108091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.108282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.108308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.108474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.108502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.108695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.108721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.108894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-23 03:34:04.108918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-23 03:34:04.109139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.109166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.109351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.109378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.109548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.109574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.109749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.109775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.109924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.109949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-23 03:34:04.110116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.110140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.110313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.110339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.110532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.110561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.110785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.110811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.110978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.111006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.111221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.111245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.111384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.111408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.111597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.111631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.111806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.111835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.112002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.112027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-23 03:34:04.112220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.112247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.112441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.112465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.112639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.112666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.112891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.112917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.113058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.113083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.113279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.113304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.113492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.113520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.113692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.113718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.113910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.113936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.114157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.114184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-23 03:34:04.114368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.114397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.114585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.114610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.114801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.114826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.115047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.115075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.115239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.115263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.115434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.115459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.115689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.115718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.115931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.115956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.116145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.116173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.116391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.116416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-23 03:34:04.116584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.116608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.116832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.116860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.117081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-23 03:34:04.117106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-23 03:34:04.117274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.117299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.117453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.117481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.117670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.117704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.117878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.117903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.118081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.118108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.118267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.118294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.118456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.118481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 
00:34:37.785 [2024-07-23 03:34:04.118668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.118697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.118894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.118920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.119065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.119090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.119305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.119333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.119489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.119519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.119740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.119766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.119958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.119986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.120211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.120236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.120403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.120428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.120603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.120641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 
00:34:37.785 [2024-07-23 03:34:04.120837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.120862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.121006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.121031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.121209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.121237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.121420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.121448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.121623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.121650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.121820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.121845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.122007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.122048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.122209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.122235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.122372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.122397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.122581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.122609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 
00:34:37.785 [2024-07-23 03:34:04.122783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.122809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.123025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.123052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.123233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.123262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.123435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.123460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.123673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.123702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.785 qpair failed and we were unable to recover it. 00:34:37.785 [2024-07-23 03:34:04.123859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.785 [2024-07-23 03:34:04.123887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.124045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.124071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.124257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.124285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.124484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.124512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.124728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.124753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 
00:34:37.786 [2024-07-23 03:34:04.124941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.124969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.125178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.125206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.125387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.125413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.125560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.125585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.125777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.125807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.125992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.126017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.126254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.126298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.126471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.126500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.126664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.126691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.126866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.126892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 
00:34:37.786 [2024-07-23 03:34:04.127088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.127113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.127297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.127324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.127505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.127531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.127677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.127702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.127874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.127900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.128122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.128172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.128380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.128407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.128581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.128607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.128833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.128861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.129047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.129075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 
00:34:37.786 [2024-07-23 03:34:04.129271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.129296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.129466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.129634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.129678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.129878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.129903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.130076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.130101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.130319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.130347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.130543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.130568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.130768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.130792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.130937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.130962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.131134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.131159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 
00:34:37.786 [2024-07-23 03:34:04.131346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.131373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.131582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.131609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.131784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.786 [2024-07-23 03:34:04.131809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:37.786 qpair failed and we were unable to recover it. 00:34:37.786 [2024-07-23 03:34:04.131986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.132029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.132196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.132226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.132393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.132421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.132598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.132630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.132802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.132828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.133001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.133026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.133221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.133247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 
00:34:37.787 [2024-07-23 03:34:04.133413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.133441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.133618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.133644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.133842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.133867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.134064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.134092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.134320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.134346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.134514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.134542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.134758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.134784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.134968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.134995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.135132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.135158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.135389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.135414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 
00:34:37.787 [2024-07-23 03:34:04.135587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.135617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.135763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.135788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.135937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.135964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.136134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.136161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.136431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.136482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.136715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.136741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.136892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.136919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.137066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.137092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.137241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.137268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 00:34:37.787 [2024-07-23 03:34:04.137467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.787 [2024-07-23 03:34:04.137493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.787 qpair failed and we were unable to recover it. 
00:34:37.787 [2024-07-23 03:34:04.137650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.787 [2024-07-23 03:34:04.137676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:37.787 qpair failed and we were unable to recover it.
[... the same pair of errors and the "qpair failed and we were unable to recover it." message repeat continuously from 03:34:04.137 through 03:34:04.181: every connection attempt to addr=10.0.0.2, port=4420 on tqpair=0x7f5018000b90 returns errno = 111 (ECONNREFUSED) and the qpair cannot be recovered; the duplicate log lines are collapsed here ...]
00:34:37.793 [2024-07-23 03:34:04.181943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.181971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.182132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.182162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.182353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.182379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.182573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.182602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.182793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.182819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.183015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.183040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.183235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.183264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.183478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.183506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.183692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.183718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.183929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.183957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 
00:34:37.793 [2024-07-23 03:34:04.184153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.184179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.184348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.184374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.184559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.184588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.184785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.184815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.184988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.185013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.185211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.185237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.185386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.793 [2024-07-23 03:34:04.185429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.793 qpair failed and we were unable to recover it. 00:34:37.793 [2024-07-23 03:34:04.185633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.185660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.185799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.185825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.185994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 
00:34:37.794 [2024-07-23 03:34:04.186150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.186309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.186561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.186783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.186957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.186985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.187179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.187207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.187397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.187423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.187621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.187664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.187812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.187837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.188000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.188026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 
00:34:37.794 [2024-07-23 03:34:04.188210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.188238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.188420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.188448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.188673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.188700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.188866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.188892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.189075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.189103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.189322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.189347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.189518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.189560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.189756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.189783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.189957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.189982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.190123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.190148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 
00:34:37.794 [2024-07-23 03:34:04.190344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.190372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.190556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.190584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.190805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.190832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.191032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.191196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.191404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.191599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.794 [2024-07-23 03:34:04.191779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.794 qpair failed and we were unable to recover it. 00:34:37.794 [2024-07-23 03:34:04.191966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.191995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.192184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.192210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 
00:34:37.795 [2024-07-23 03:34:04.192353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.192379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.192547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.192572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.192752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.192783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.192957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.192983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.193150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.193175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.193362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.193390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.193554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.193579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.193784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.193811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.193983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.194199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 
00:34:37.795 [2024-07-23 03:34:04.194364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.194538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.194728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.194906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.194931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.195070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.195097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.195233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.195258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.195430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.195456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.195653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.195696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.195839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.195864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.196017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.196045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 
00:34:37.795 [2024-07-23 03:34:04.196267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.196295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.196490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.196515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.196665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.196691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.196860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.196885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.197055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.197081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.197251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.197277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.197445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.197471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.197640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.197666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.197823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.197851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.198039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.198067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 
00:34:37.795 [2024-07-23 03:34:04.198228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.198255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.198438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.198466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.198635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.198664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.198882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.198911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.199103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.199128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.795 qpair failed and we were unable to recover it. 00:34:37.795 [2024-07-23 03:34:04.199318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.795 [2024-07-23 03:34:04.199346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.199508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.199537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.199716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.199744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.199921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.199948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.200122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.200147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 
00:34:37.796 [2024-07-23 03:34:04.200344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.200372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.200531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.200561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.200726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.200753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.200901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.200928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.201116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.201144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.201331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.201356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.201575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.201603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.201801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.201827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.201994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.202162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 
00:34:37.796 [2024-07-23 03:34:04.202356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.202576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.202777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.202966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.202994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.203179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.203204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.203421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.203449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.203636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.203666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.203856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.203882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.204075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.204103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.204274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.204300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 
00:34:37.796 [2024-07-23 03:34:04.204470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.204495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.204721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.204747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.204939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.204968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.205161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.205188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.205407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.205435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.205588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.205625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.205788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.205814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.206027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.206055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.206209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.206237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.206406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.206432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 
00:34:37.796 [2024-07-23 03:34:04.206637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.206663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.206863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.206906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.796 [2024-07-23 03:34:04.207079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.796 [2024-07-23 03:34:04.207105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.796 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.207302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.207331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.207551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.207579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.207775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.207803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.207950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.207975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.208148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.208173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.208313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.208338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.208500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.208529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 
00:34:37.797 [2024-07-23 03:34:04.208693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.208721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.208871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.208897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.209036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.209063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.209213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.209256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.209477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.209503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.209699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.209728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.209917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.209945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.210122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.210147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.210282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.210307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 00:34:37.797 [2024-07-23 03:34:04.210466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.797 [2024-07-23 03:34:04.210491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.797 qpair failed and we were unable to recover it. 
00:34:37.803 [2024-07-23 03:34:04.252570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.252599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.252800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.252826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.253048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.253076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.253256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.253284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.253473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.253498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.253690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.253720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.253906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.253932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.254096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.254121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.254315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.254341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.254474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.254499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 
00:34:37.803 [2024-07-23 03:34:04.254646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.254676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.254823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.254849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.255054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.255081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.255248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.255274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.255435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.255463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.255653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.255686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.255855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.255880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.256014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.256039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.256203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.256246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.256462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.256487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 
00:34:37.803 [2024-07-23 03:34:04.256669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.256698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.256875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.256903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.257063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.257090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.257301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.257329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.257508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.257537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.257704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.257731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.257923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.257951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.258116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.258144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.258330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.258355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.258551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.258579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 
00:34:37.803 [2024-07-23 03:34:04.258800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.258829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.259050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.259075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.259262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.259292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.259480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.259509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.259709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.259735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.259953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.259981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.803 qpair failed and we were unable to recover it. 00:34:37.803 [2024-07-23 03:34:04.260178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.803 [2024-07-23 03:34:04.260207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.260406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.260432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.260596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.260644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.260818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.260846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 
00:34:37.804 [2024-07-23 03:34:04.261016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.261041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.261197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.261225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.261387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.261413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.261607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.261642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.261831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.261859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.262051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.262079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.262292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.262318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.262460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.262486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.262685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.262716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.262906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.262932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 
00:34:37.804 [2024-07-23 03:34:04.263115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.263148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.263313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.263341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.263520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.263548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.263742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.263768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.263956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.263984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.264153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.264178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.264323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.264348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.264515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.264542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.264737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.264763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.264975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.265003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 
00:34:37.804 [2024-07-23 03:34:04.265187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.265215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.265401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.265427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.265585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.265620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.265816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.265845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.266052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.266078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.266268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.266296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.266480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.266509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.266698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.266725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.266918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.804 [2024-07-23 03:34:04.266946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.804 qpair failed and we were unable to recover it. 00:34:37.804 [2024-07-23 03:34:04.267109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.267138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 
00:34:37.805 [2024-07-23 03:34:04.267297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.267323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.267463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.267508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.267677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.267706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.267897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.267922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.268083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.268111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.268294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.268322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.268505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.268531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.268744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.268771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.268910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.268937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.269087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.269113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 
00:34:37.805 [2024-07-23 03:34:04.269269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.269312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.269535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.269563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.269739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.269765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.269939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.269965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.270136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.270166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.270361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.805 [2024-07-23 03:34:04.270387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:37.805 qpair failed and we were unable to recover it. 00:34:37.805 [2024-07-23 03:34:04.270569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.270594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.270800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.270829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.271029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.271054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.271221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.271249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 
00:34:38.089 [2024-07-23 03:34:04.271470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.271499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.271729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.271757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.271909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.271935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.272077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.272102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.272249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.272274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.272450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.272476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.272652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.272678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.272847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.272873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.273009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.273035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.273175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.273215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 
00:34:38.089 [2024-07-23 03:34:04.273395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.273421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.273621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.273652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.273814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.273843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.274034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.274059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.274212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.274238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.274373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.274398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.274592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-23 03:34:04.274624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-23 03:34:04.274770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.274795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.274933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.274958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.275152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.275178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-23 03:34:04.275333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.275361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.275551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.275578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.275759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.275785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.275926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.275953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.276136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.276165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.276326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.276353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.276493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.276534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.276727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.276756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.276949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.276975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.277180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.277208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-23 03:34:04.277371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.277399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.277566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.277592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.277781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.277820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.277993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.278020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.278165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.278191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.278337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.278379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.278602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.278639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.278799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.278825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.278979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.279177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-23 03:34:04.279373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.279537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.279742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.279939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.279963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.280129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.280158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.280353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.280379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.280576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.280605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.280780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.280806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.280964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.281008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-23 03:34:04.281197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-23 03:34:04.281223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 - 00:34:38.096 [2024-07-23 03:34:04.281376 - 03:34:04.323770] The same three-message sequence shown above - posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeats without interruption throughout this interval.
00:34:38.096 [2024-07-23 03:34:04.323912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.323937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.096 [2024-07-23 03:34:04.324103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.324132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.096 [2024-07-23 03:34:04.324360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.324388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.096 [2024-07-23 03:34:04.324551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.324576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.096 [2024-07-23 03:34:04.324766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.324792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.096 [2024-07-23 03:34:04.324956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.096 [2024-07-23 03:34:04.324983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.096 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.325220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.325270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.325434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.325465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.325650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.325675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.325843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.325868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 
00:34:38.097 [2024-07-23 03:34:04.326029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.326056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.326298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.326344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.326543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.326571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.326794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.326820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.327034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.327061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.327254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.327308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.327505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.327531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.327681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.327707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.327883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.327909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.328091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.328116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 
00:34:38.097 [2024-07-23 03:34:04.328285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.328310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.328486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.328512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.328704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.328729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.328944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.328972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.329122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.329147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.329285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.329326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.329485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.329514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.329683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.329710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.329856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.329883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.330103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.330131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 
00:34:38.097 [2024-07-23 03:34:04.330338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.330367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.330514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.330542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.330754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.330780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.330950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.330975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.331175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.331201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.331347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.331373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.331512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.331554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.331756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.331782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.331923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.331949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.097 [2024-07-23 03:34:04.332188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.332238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 
00:34:38.097 [2024-07-23 03:34:04.332445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.097 [2024-07-23 03:34:04.332474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.097 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.332688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.332714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.332854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.332886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.333037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.333065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.333300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.333329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.333547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.333575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.333798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.333824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.334039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.334064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.334269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.334327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.334513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.334542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 
00:34:38.098 [2024-07-23 03:34:04.334720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.334746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.334892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.334918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.335055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.335080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.335271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.335313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.335517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.335546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.335765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.335791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.335975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.336000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.336232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.336282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.336467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.336497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.336699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.336725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 
00:34:38.098 [2024-07-23 03:34:04.336881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.336906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.337122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.337150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.337384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.337433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.337631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.337684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.337832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.337857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.338100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.338125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.338292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.338334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.338533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.338561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.338765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.338791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.338965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.338991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 
00:34:38.098 [2024-07-23 03:34:04.339181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.339210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.339431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.339456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.339596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.339626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.339795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.339821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.340019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.340047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.340209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.340244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.340403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-23 03:34:04.340433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-23 03:34:04.340649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.340675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.340848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.340874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.341012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.341054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-23 03:34:04.341218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.341246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.341459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.341510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.341740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.341766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.341945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.341970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.342222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.342271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.342495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.342524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.342747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.342772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.342948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.342972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.343142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.343169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.343360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.343390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-23 03:34:04.343631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.343663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.343815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.343841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.344029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.344057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.344242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.344270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.344422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.344450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.344599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.344637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.344836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.344861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.345011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.345054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.345265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.345292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.345501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.345529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-23 03:34:04.345748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.345775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.345912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.345937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.346148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.346180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.346350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.346375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.346567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.346596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-23 03:34:04.346810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-23 03:34:04.346850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.347028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.347055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.347201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.347227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.347464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.347513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.347683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.347710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-23 03:34:04.347934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.347977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.348212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.348260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.348456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.348507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.348702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.348728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.348875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.348900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.349094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.349138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.349339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.349382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.349555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.349581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.349731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.349759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.350009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.350035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-23 03:34:04.350304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.350354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.350525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.350551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.350726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.350753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.350885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.350911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.351103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.351132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.351383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.351426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.351640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.351683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.351908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.351951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.352177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.352219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.352429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.352455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-23 03:34:04.352626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.352653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.352821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.352865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.353086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.353129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.353326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.353352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.353523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.353549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.353739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.353782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.353989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.354015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.354234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.354278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.354449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.354474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-23 03:34:04.354679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.354707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-23 03:34:04.354886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-23 03:34:04.354935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.355129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.355158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.355335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.355366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.355505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.355533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.355752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.355796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.355995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.356024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.356226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.356269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.356478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.356503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.356653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.356679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.356886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.356914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-23 03:34:04.357100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.357146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.357326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.357351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.357523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.357699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.357725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.357931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.357957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.358125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.358152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.358308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.358348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.358498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.358526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.358699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.358726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.358871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.358897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-23 03:34:04.359059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.359085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.359276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.359304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.359522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.359577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.359736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.359764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.359986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.360030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.360234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.360279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.360430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.360456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.360651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.360677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.360871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.360900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.361124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.361155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-23 03:34:04.361294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.361321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.361458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.361484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.361659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.361685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.361861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.361904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.362145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.362171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.362343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.362369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-23 03:34:04.362514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-23 03:34:04.362539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.362701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.362728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.362946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.362991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.363231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.363275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-23 03:34:04.363443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.363469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.363701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.363727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.363870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.363897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.364095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.364139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.364344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.364369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.364513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.364539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.364701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.364728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.364884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.364913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.365156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.365182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.365375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.365400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-23 03:34:04.365536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.365563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.365736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.365781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.365923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.365951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.366125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.366151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.366292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.366318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.366513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.366538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.366717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.366744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.366892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.366918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.367086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.367113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.367244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.367270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-23 03:34:04.367415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.367441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.367575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.367601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.367777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.367802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.368026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.368069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.368240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.368266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.368434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.368460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.368607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.368639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.368808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.368850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.369057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.369082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.369279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.369309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-23 03:34:04.369508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.369534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-23 03:34:04.369677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-23 03:34:04.369704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.369890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.369933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.370181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.370207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.370395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.370438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.370606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.370639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.370804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.370830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.371001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.371043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.371245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.371272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.371477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.371502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-23 03:34:04.371667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.371697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.371902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.371928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.372107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.372136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.372369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.372395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.372567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.372593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.372819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.372863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.373025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.373068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.373272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.373297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.373471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.373497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.373694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.373742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-23 03:34:04.373978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.374003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.374214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.374389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.374415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.374562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.374589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.374816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.374860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.375075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.375118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.375287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.375330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.375527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.375553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.375732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.375777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.375942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.375985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-23 03:34:04.376181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.376224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.376365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.376391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.376584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.376609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.376823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.376867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.377067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.377111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-23 03:34:04.377309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-23 03:34:04.377338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.377548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.377573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.377778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.377823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.378046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.378090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.378275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.378306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-23 03:34:04.378474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.378501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.378695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.378739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.378968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.379011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.379204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.379247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.379405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.379431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.379604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.379639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.379870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.379912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.380097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.380143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.380302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.380345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.380537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.380563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-23 03:34:04.380744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.380770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.380993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.381036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.381262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.381305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.381490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.381516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.381705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.381749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.381978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.382023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.382222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.382264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.382412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.382439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.382633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.382660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.382824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.382869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-23 03:34:04.383031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.383073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.383243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.383288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.383459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.383485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.383677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.383725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.383915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-23 03:34:04.383958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-23 03:34:04.384184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.384226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.384399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.384425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.384598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.384643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.384868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.384911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.385136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.385178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-23 03:34:04.385374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.385417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.385589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.385626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.385832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.385876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.386059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.386102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.386330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.386373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.386545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.386571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.386746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.386791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.386986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.387030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.387221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.387265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.387437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.387471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-23 03:34:04.387688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.387732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.387950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.387993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.388154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.388197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.388404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.388431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.388599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.388630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.388823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.388870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.389059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.389101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.389323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.389365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.389560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.389586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-23 03:34:04.389794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-23 03:34:04.389823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-23 03:34:04.390001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.105 [2024-07-23 03:34:04.390045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420
00:34:38.105 qpair failed and we were unable to recover it.
00:34:38.105 [2024-07-23 03:34:04.390235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.105 [2024-07-23 03:34:04.390279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420
00:34:38.105 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x7f5010000b90 through 2024-07-23 03:34:04.403 ...]
00:34:38.107 [2024-07-23 03:34:04.403653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.107 [2024-07-23 03:34:04.403694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.107 qpair failed and we were unable to recover it.
00:34:38.107 [2024-07-23 03:34:04.403848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.107 [2024-07-23 03:34:04.403875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.107 qpair failed and we were unable to recover it.
[... the same sequence repeats, alternating between tqpair=0x2179840 and tqpair=0x7f5010000b90, through 2024-07-23 03:34:04.437 ...]
00:34:38.111 [2024-07-23 03:34:04.437597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.111 [2024-07-23 03:34:04.437637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.111 qpair failed and we were unable to recover it.
00:34:38.111 [2024-07-23 03:34:04.437825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.437850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.438024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.438052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.438250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.438278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.438444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.438470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.438687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.438716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.438898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.111 [2024-07-23 03:34:04.438926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.111 qpair failed and we were unable to recover it. 00:34:38.111 [2024-07-23 03:34:04.439084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.439109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.439294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.439321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.439638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.439670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.439862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.439886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 
00:34:38.112 [2024-07-23 03:34:04.440077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.440104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.440317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.440344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.440532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.440556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.440734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.440762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.440950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.440977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.441169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.441193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.441411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.441438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.441626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.441654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.441843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.441868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.442087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.442115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 
00:34:38.112 [2024-07-23 03:34:04.442329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.442354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.442521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.442546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.442755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.442784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.442945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.442973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.443135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.443161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.443322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.443352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.443538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.443566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.443782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.443808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.443955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.443980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.444170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.444198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 
00:34:38.112 [2024-07-23 03:34:04.444374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.444399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.444538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.444563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.444713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.444741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.444935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.444960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.445134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.445159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.445375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.445403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.445631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.445657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.445846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.445874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.446056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.446084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.446308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.446334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 
00:34:38.112 [2024-07-23 03:34:04.446490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.446518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.446740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.112 [2024-07-23 03:34:04.446769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.112 qpair failed and we were unable to recover it. 00:34:38.112 [2024-07-23 03:34:04.446960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.446985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.447179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.447207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.447393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.447422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.447588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.447620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.447817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.447843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.448035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.448064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.448234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.448259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.448449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.448477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-23 03:34:04.448641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.448669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.448833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.448859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.449036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.449064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.449248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.449276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.449431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.449456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.449587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.449633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.449806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.449836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.450021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.450047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.450211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.450236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.450434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.450462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-23 03:34:04.450662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.450688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.450870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.450898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.451096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.451122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.451271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.451298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.451471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.451497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.451660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.451690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.451854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.451879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.452027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.452052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.452190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.452215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.452382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.452407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-23 03:34:04.452595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.452630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.452825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.452850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.453021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.453047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.453232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.453260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.453474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.453499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.453638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.453666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.453859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.453892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.454054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.454083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-23 03:34:04.454303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-23 03:34:04.454328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.454515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.454543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-23 03:34:04.454758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.454787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.454990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.455016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.455228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.455256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.455421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.455449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.455626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.455663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.455825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.455853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.456066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.456094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.456264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.456289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.456461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.456504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.456663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.456692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-23 03:34:04.456863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.456894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.457083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.457111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.457291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.457320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.457518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.457543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.457731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.457760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.457942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.457970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.458151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.458176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.458393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.458421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.458608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.458645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.458868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.458895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-23 03:34:04.459058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.459085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.459261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.459289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.459486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.459511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.459668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.459700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.459919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.459944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.460138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.460163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.460329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-23 03:34:04.460354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-23 03:34:04.460555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.460583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.460786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.460812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.460958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.460983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-23 03:34:04.461150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.461192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.461382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.461407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.461627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.461670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.461874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.461916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.462135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.462160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.462349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.462377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.462557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.462585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.462814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.462840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.463022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.463047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.463210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.463236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-23 03:34:04.463392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.463417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.463609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.463644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.463831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.463859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.464022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.464047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.464228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.464256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.464420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.464448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.464638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.464664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.464867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.464895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.465047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.465075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.465264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.465289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-23 03:34:04.465461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.465486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.465658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.465687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.465877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.465904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.466094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.466122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.466318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.466344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.466487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.466513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.466708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.466737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.466951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.466979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.467162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.467188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.467377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.467406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-23 03:34:04.467580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.467606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.467748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.467772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.467936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-23 03:34:04.467961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-23 03:34:04.468117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.468145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.468370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.468396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.468563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.468592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.468786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.468811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.468957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.468982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.469164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.469193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.469378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.469407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-23 03:34:04.469581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.469606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.469777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.469803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.469999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.470027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.470216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.470242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.470390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.470416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.470550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.470575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.470760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.470786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.470972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.471000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.471163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.471192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.471363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.471388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-23 03:34:04.471538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.471567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.471788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.471817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.471976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.472002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.472143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.472184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.472373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.472401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.472562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.472587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.472767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.472792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.473021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.473049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.473281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.473306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.473457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.473482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-23 03:34:04.473648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.473674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.473844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.473873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.474040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.474068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.474248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.474276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.474464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.474493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.474696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.474723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.474872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.474897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.475061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.475086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-23 03:34:04.475274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-23 03:34:04.475301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.475468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.475496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 
00:34:38.117 [2024-07-23 03:34:04.475708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.475734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.475920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.475947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.476108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.476133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.476299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.476324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.476515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.476543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.476764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.476793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.476948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.476973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.477115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.477140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.477284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.477309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.477475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.477501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 
00:34:38.117 [2024-07-23 03:34:04.477693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.477719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.477901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.477927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.478093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.478118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.478285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.478314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.478496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.478525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.478719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.478745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.478886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.478913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.479100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.479128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.479296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.479325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.479483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.479511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 
00:34:38.117 [2024-07-23 03:34:04.479684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.479709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.479885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.479910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.480124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.480152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.480337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.480365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.480577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.480605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.480804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.480831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.481053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.481080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.481239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.481264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.481407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.481450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.481630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.481656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 
00:34:38.117 [2024-07-23 03:34:04.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.481853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.482000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.482027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.482225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.482254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.482445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.482470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.117 [2024-07-23 03:34:04.482638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.117 [2024-07-23 03:34:04.482664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.117 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.482837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.482862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.483037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.483064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.483221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.483249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.483395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.483423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.483640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.483666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 
00:34:38.118 [2024-07-23 03:34:04.483853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.483881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.484106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.484132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.484297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.484323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.484458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.484483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.484633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.484660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.484804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.484835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.485026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.485054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.485242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.485270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.485440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.485465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.485609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.485641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 
00:34:38.118 [2024-07-23 03:34:04.485859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.485887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.486060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.486086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.486281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.486307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.486507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.486535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.486734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.486760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.486944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.486969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.487109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.487136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.487317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.487342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.487541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.487569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.487772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.487802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 
00:34:38.118 [2024-07-23 03:34:04.487988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.488013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.488232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.488261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.488452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.488478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.488653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.488680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.488900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.488928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.489096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.489124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.489314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.489341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.489563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.489592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.489806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.489835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 00:34:38.118 [2024-07-23 03:34:04.490005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-23 03:34:04.490030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it. 
00:34:38.119 [2024-07-23 03:34:04.490196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.490221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.490399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.490425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.490596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.490632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.490795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.490821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.490992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.491017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.491182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.491207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.491402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.491429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.491633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.491664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.491840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.491877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.492032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.492060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 
00:34:38.119 [2024-07-23 03:34:04.492244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.492272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.492483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.492511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.492713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.492739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.492901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.492929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.493105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.493132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.493318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.493347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.493554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.493580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.493724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.493750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.493965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.493993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.494178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.494208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 
00:34:38.119 [2024-07-23 03:34:04.494400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.494427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.494641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.494670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.494866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.494891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.495034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.495059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.495192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.495235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.495428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.495453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.495633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.495661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.495852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.495880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.496088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.119 [2024-07-23 03:34:04.496116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.119 qpair failed and we were unable to recover it. 00:34:38.119 [2024-07-23 03:34:04.496306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.496331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 
00:34:38.120 [2024-07-23 03:34:04.496477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.496503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.496724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.496753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.496920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.496946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.497163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.497191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.497382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.497410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.497578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.497604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.497815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.497844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.498028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.498056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.498274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.498299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.498474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.498500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 
00:34:38.120 [2024-07-23 03:34:04.498717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.498746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.498950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.498976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.499113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.499139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.499302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.499342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.499516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.499544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.499735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.499761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.499954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.499982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.500199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.500225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.500365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.500391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.500557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.500583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 
00:34:38.120 [2024-07-23 03:34:04.500808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.500834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.501029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.501056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.501214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.501244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.501409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.501435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.501599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.501633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.501842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.501867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.502035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.502061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.502231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.502259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.502469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.502497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.502699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.502734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 
00:34:38.120 [2024-07-23 03:34:04.502934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.502962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.503153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.503182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.503339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.503365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.503547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.503575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.503743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.503776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.120 qpair failed and we were unable to recover it. 00:34:38.120 [2024-07-23 03:34:04.503966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.120 [2024-07-23 03:34:04.503991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.504155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.504182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.504335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.504364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.504552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.504577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.504743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.504768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 
00:34:38.121 [2024-07-23 03:34:04.504913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.504941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.505141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.505166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.505350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.505378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.505566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.505591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.505812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.505857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.506032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.506059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.506261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.506305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.506483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.506510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.506715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.506741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.506917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.506943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 
00:34:38.121 [2024-07-23 03:34:04.507166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.507210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.507418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.507445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.507622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.507649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.507845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.507893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.508100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.508143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.508364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.508393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.508571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.508596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.508766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.508814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.508982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.509027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.509272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.509322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 
00:34:38.121 [2024-07-23 03:34:04.509506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.509531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.509718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.509767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.509994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.510036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.510201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.510243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.510416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.510442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.510644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.510680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.510852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.510895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.511121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.511165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.511427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.511471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.511697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.511740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 
00:34:38.121 [2024-07-23 03:34:04.511947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.121 [2024-07-23 03:34:04.511990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.121 qpair failed and we were unable to recover it. 00:34:38.121 [2024-07-23 03:34:04.512187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.512212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.512356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.512381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.512557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.512582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.512774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.512801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.512997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.513022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.513216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.513260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.513401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.513426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.513594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.513628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.513821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.513866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 
00:34:38.122 [2024-07-23 03:34:04.514090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.514136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.514330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.514372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.514568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.514593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.514807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.514852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.515040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.515084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.515285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.515329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.515529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.515554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.515739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.515784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.515960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.516005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.516206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.516248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 
00:34:38.122 [2024-07-23 03:34:04.516399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.516425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.516566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.516591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.516808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.516852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.517050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.517080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.517311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.517340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.517505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.517531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.517675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.517701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.517876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.517905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.518179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.518207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.518400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.518426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 
00:34:38.122 [2024-07-23 03:34:04.518619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.518646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.518840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.518869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.519066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.519095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.519287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.519315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.519498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.519543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.519712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-23 03:34:04.519738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-23 03:34:04.519925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.519953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.520158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.520209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.520361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.520389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.520577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.520605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-23 03:34:04.520774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.520799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.521024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.521053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.521319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.521347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.521590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.521628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.521799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.521824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.521996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.522025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.522296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.522341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.522502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.522530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.522731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.522757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.522929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.522956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-23 03:34:04.523155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.523183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.523401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.523429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.523610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.523645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.523802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.523827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.524016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.524045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.524333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.524379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.524588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.524624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.524791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.524817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.524956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.524982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.525140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.525170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-23 03:34:04.525360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.525389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.525541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.525566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.525763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.525789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.525938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.525964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.526172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.526202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.526391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.526420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.526587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.526618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.526784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.526810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.527026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.527054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.527239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.527267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-23 03:34:04.527443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.527471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.527636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-23 03:34:04.527680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-23 03:34:04.527830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.527855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.528056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.528081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.528249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.528291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.528473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.528500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.528689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.528716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.528861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.528900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.529170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.529198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.529432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.529460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-23 03:34:04.529642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.529684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.529855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.529880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.530081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.530106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.530304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.530332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.530552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.530581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.530791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.530817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.531014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.531042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.531211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.531239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.531449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.531478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.531673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.531699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-23 03:34:04.531844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.531870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.532054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.532079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.532249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.532278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.532469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.532497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.532682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.532709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.532852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.532894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.533079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.533107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.533290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.533318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.533506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.533535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.533785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.533811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-23 03:34:04.533984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.534009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.534203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.534232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.534467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-23 03:34:04.534495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-23 03:34:04.534684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.534711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.534849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.534876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.535094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.535127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.535390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.535439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.535599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.535635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.535824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.535849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.536018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.536044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-23 03:34:04.536234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.536262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.536444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.536472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.536669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.536695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.536866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.536892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.537088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.537115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.537370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.537418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.537630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.537674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.537845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.537871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.538043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.538068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.538219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.538244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-23 03:34:04.538417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.538442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.538642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.538669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.538869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.538897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.539060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.539088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.539325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.539350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.539547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.539575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.539749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.539777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.539950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.539976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.540149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.540177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.540366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.540394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-23 03:34:04.540585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.540618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.540812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.540841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.541033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.541255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.541427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.541655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.541854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.541998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.542023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-23 03:34:04.542191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-23 03:34:04.542217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.542357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.542382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 
00:34:38.126 [2024-07-23 03:34:04.542563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.542589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.542761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.542787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.542955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.542982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.543121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.543146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.543326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.543354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.543543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.543571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.543794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.543821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.544003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.544032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.544217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.544243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.544426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.544455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 
00:34:38.126 [2024-07-23 03:34:04.544697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.544723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.544898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.544923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.545110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.545139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.545337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.545362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.545508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.545533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.545755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.545784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.545971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.545999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.546169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.546194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.546357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.546385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.546566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.546599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 
00:34:38.126 [2024-07-23 03:34:04.546803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.546828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.547013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.547040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.547223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.547251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.547445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.547471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.547642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.547685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.547876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.547905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.548077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.548102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.548244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.548269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.548422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.548447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.548617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.548644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 
00:34:38.126 [2024-07-23 03:34:04.548831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.548859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.549047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.549074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.549286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.549310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-23 03:34:04.549471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-23 03:34:04.549500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.549702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.549728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.549873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.549900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.550071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.550096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.550304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.550332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.550521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.550546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.550760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.550790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 
00:34:38.127 [2024-07-23 03:34:04.550948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.550976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.551141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.551167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.551349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.551374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.551564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.551592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.551765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.551792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.551980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.552008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.552193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.552218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.552397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.552422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.552605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.552643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.552842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.552868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 
00:34:38.127 [2024-07-23 03:34:04.553033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.553059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.553226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.553256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.553424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.553452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.553645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.553671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.553892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.553920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.554130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.554158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.554354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.554379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.554571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.554601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.554805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.554830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.555049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.555077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 
00:34:38.127 [2024-07-23 03:34:04.555238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.555271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.555438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.555463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.555611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.555644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.555840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.555866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.556083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.556111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.556307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.556333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.556501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.556542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.556743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.556769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.556918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.127 [2024-07-23 03:34:04.556944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.127 qpair failed and we were unable to recover it. 00:34:38.127 [2024-07-23 03:34:04.557117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.557142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 
00:34:38.128 [2024-07-23 03:34:04.557306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.557331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.557514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.557540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.557764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.557793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.557957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.557986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.558157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.558184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.558375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.558403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.558628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.558654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.558798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.558824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.559042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.559070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.559282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.559310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 
00:34:38.128 [2024-07-23 03:34:04.559502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.559526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.559724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.559749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.559956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.559981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.560173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.560198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.560364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.560390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.560548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.560576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.560792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.560818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.560964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.560993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.561196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.561224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.561390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.561415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 
00:34:38.128 [2024-07-23 03:34:04.561644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.561673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.561828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.561857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.562038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.562063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.562211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.562254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.562442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.562470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.562697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.562730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.562911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.562939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.563100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.563128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.563286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.563311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.563473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.563503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 
00:34:38.128 [2024-07-23 03:34:04.563663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.563692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.563871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.563897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.564090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.564118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.564272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.564299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.564462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.128 [2024-07-23 03:34:04.564487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.128 qpair failed and we were unable to recover it. 00:34:38.128 [2024-07-23 03:34:04.564626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.564672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.564877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.564902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.565067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.565226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.565394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 
00:34:38.129 [2024-07-23 03:34:04.565585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.565787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.565956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.565982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.566148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.566173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.566314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.566343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.566483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.566508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.566682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.566708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.566873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.566901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.567090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.567115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.567289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.567314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 
00:34:38.129 [2024-07-23 03:34:04.567533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.567560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.567746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.567774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.567941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.567966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.568112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.568137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.568277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.568302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.568518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.568545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.568715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.568750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.568897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.568942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.569137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.569163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.569343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.569371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 
00:34:38.129 [2024-07-23 03:34:04.569521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.569549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.569766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.569791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.569974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.570002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.570169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.570194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.570364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.570389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.570577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.129 [2024-07-23 03:34:04.570606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.129 qpair failed and we were unable to recover it. 00:34:38.129 [2024-07-23 03:34:04.570781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.570806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.571000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.571025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.571217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.571245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.571432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.571460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 
00:34:38.130 [2024-07-23 03:34:04.571652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.571678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.571854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.571883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.572042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.572070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.572257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.572282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.572475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.572500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.572689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.572717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.572921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.572946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.573170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.573199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.573374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.573399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.573597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.573636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 
00:34:38.130 [2024-07-23 03:34:04.573838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.573864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.574032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.574061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.574239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.574265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.574451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.574479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.574642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.574672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.574900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.574925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.575072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.575097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.575237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.575263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.575457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.575483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.575670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.575700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 
00:34:38.130 [2024-07-23 03:34:04.575848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.575876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.576070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.576095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.576233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.576258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.576394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.576418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.576593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.576634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.576870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.576898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.577119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.577147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.577350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.577375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.577524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.577550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.130 [2024-07-23 03:34:04.577745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.577774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 
00:34:38.130 [2024-07-23 03:34:04.577961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.130 [2024-07-23 03:34:04.577986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.130 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.578205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.578233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.578403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.578429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.578631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.578674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.578822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.578847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.579034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.579062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.579249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.579275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.579416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.579442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.579577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.579602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.579801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.579827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 
00:34:38.131 [2024-07-23 03:34:04.580052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.580081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.580270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.580297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.580488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.580516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.580685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.580715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.580903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.580931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.581094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.581119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.581265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.581290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.581464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.581489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.581662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.581688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.581850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.581878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 
00:34:38.131 [2024-07-23 03:34:04.582051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.582076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.582270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.582295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.582461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.582489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.582688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.582714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.582920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.582945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.583168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.583196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.583379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.583404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.583586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.583611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.583770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.583795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.583956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.583981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 
00:34:38.131 [2024-07-23 03:34:04.584174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.584199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.584386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.584414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.584594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.584629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.584830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.584855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.585069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.585097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.585313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.131 [2024-07-23 03:34:04.585340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.131 qpair failed and we were unable to recover it. 00:34:38.131 [2024-07-23 03:34:04.585504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.585529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.585688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.585717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.585888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.585913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.586080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.586109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 
00:34:38.132 [2024-07-23 03:34:04.586300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.586328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.586521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.586545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.586739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.586765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.586981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.587008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.587200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.587227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.587391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.587417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.587617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.587648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.587834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.587862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.588030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.588056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.588225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.588250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 
00:34:38.132 [2024-07-23 03:34:04.588384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.588410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.588651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.588677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.588844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.588872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.589052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.589080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.589291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.589316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.589477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.589505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.589683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.589708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.589909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.589934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.590134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.590162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.590374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.590403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 
00:34:38.132 [2024-07-23 03:34:04.590587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.590621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.590813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.590838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.591056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.591081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.591252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.591277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.591461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.591488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.591707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.591736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.591931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.591960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.592180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.592207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.592401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.592426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.592598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.592629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 
00:34:38.132 [2024-07-23 03:34:04.592804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.592830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.593019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.593047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.593239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.132 [2024-07-23 03:34:04.593264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.132 qpair failed and we were unable to recover it. 00:34:38.132 [2024-07-23 03:34:04.593437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.593462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.593599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.593632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.593799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.593825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.594029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.594058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.594206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.594234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.594419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.594444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.594679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.594706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 
00:34:38.133 [2024-07-23 03:34:04.594859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.594885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.595114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.595140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.595333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.595360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.595515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.595543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.595740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.595765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.595930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.595959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.596155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.596180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.596347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.596372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.596584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.596625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.596812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.596837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 
00:34:38.133 [2024-07-23 03:34:04.597035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.597060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.597255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.597283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.597440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.597468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.597684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.597710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.597916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.597941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.598167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.598194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.598385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.598411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.598600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.598636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.598822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.598848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.599004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.599030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 
00:34:38.133 [2024-07-23 03:34:04.599242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.599270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.599458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.599486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.599705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.599732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.599956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.599982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.600149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.600174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.600317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.600343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.600528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.600556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.600751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.600780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.600979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.601004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 00:34:38.133 [2024-07-23 03:34:04.601202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.601229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.133 qpair failed and we were unable to recover it. 
00:34:38.133 [2024-07-23 03:34:04.601418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.133 [2024-07-23 03:34:04.601445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.601628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.601663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.601824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.601849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.602023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.602051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.602266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.602291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.602482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.602511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.602725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.602754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.602951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.602976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.603170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.603198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.603402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.603430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 
00:34:38.134 [2024-07-23 03:34:04.603620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.603672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.603852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.603903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.604089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.604118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.604312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.604337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.604527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.604555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.604748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.604775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.604940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.604965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.605129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.605159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.605308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.605336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.605518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.605546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 
00:34:38.134 [2024-07-23 03:34:04.605704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.605731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.605922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.605950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.606127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.606155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.606367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.606396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.606565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.606594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.606818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.606844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.606999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.607028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.607223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.607248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.607410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.607436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.607630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.607659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 
00:34:38.134 [2024-07-23 03:34:04.607843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.607871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.608057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.608083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.608246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.608271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.608463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.608492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.608663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.134 [2024-07-23 03:34:04.608689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.134 qpair failed and we were unable to recover it. 00:34:38.134 [2024-07-23 03:34:04.608850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.135 [2024-07-23 03:34:04.608876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.135 qpair failed and we were unable to recover it. 00:34:38.135 [2024-07-23 03:34:04.609021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.135 [2024-07-23 03:34:04.609064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.135 qpair failed and we were unable to recover it. 00:34:38.135 [2024-07-23 03:34:04.609255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.135 [2024-07-23 03:34:04.609280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.135 qpair failed and we were unable to recover it. 00:34:38.135 [2024-07-23 03:34:04.609429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.135 [2024-07-23 03:34:04.609470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.135 qpair failed and we were unable to recover it. 00:34:38.135 [2024-07-23 03:34:04.609638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.135 [2024-07-23 03:34:04.609681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.135 qpair failed and we were unable to recover it. 
00:34:38.135 [2024-07-23 03:34:04.609867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.135 [2024-07-23 03:34:04.609895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.135 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 from posix_sock_create; sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it") repeats continuously ...]
00:34:38.421 [2024-07-23 03:34:04.652840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.421 [2024-07-23 03:34:04.652868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.421 qpair failed and we were unable to recover it.
00:34:38.421 [2024-07-23 03:34:04.653060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.653085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.653238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.653264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.653455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.653481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.653653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.653679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.653851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.653878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.654063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.654092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.654284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.654309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.654500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.654528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.654686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.654715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.654879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.654905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 
00:34:38.421 [2024-07-23 03:34:04.655082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.655125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.655324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.655349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.655517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.421 [2024-07-23 03:34:04.655542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.421 qpair failed and we were unable to recover it. 00:34:38.421 [2024-07-23 03:34:04.655736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.655766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.655916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.655944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.656100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.656126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.656276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.656301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.656446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.656471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.656618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.656648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.656846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.656874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 
00:34:38.422 [2024-07-23 03:34:04.657024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.657051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.657226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.657250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.657421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.657446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.657603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.657650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.657803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.657829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.658022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.658050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.658203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.658231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.658424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.658449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.658626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.658669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.658802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.658827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 
00:34:38.422 [2024-07-23 03:34:04.658974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.659191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.659386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.659593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.659795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.659966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.659991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.660195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.660223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.660413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.660440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.660629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.660673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.660843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.660869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 
00:34:38.422 [2024-07-23 03:34:04.661057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.661085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.661271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.661300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.661554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.661582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.661773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.661798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.661964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.661989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.662159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.422 [2024-07-23 03:34:04.662184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.422 qpair failed and we were unable to recover it. 00:34:38.422 [2024-07-23 03:34:04.662349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.662375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.662535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.662560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.662742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.662768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.662946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.662971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 
00:34:38.423 [2024-07-23 03:34:04.663121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.663163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.663354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.663379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.663564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.663592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.663817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.663846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.664052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.664077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.664228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.664253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.664441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.664469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.664641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.664667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.664830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.664858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.665047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.665076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 
00:34:38.423 [2024-07-23 03:34:04.665246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.665271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.665439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.665465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.665659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.665687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.665878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.665905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.666094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.666122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.666309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.666337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.666549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.666577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.666796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.666822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.667018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.667046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.667263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.667289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 
00:34:38.423 [2024-07-23 03:34:04.667482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.667510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.667693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.667722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.667913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.667938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.668140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.668166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.668391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.668417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.668586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.668612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.668819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.668847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.669020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.669045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.669183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.669208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.669375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.669417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 
00:34:38.423 [2024-07-23 03:34:04.669603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.669650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.669846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.669872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.670021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.670047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.423 [2024-07-23 03:34:04.670222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.423 [2024-07-23 03:34:04.670250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.423 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.670464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.670489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.670627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.670653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.670860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.670902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.671069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.671094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.671294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.671322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.671522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.671547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 
00:34:38.424 [2024-07-23 03:34:04.671697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.671723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.671865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.671905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.672102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.672130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.672324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.672348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.672511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.672538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.672761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.672787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.672999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.673024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.673188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.673215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.673367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.673395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.673622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.673648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 
00:34:38.424 [2024-07-23 03:34:04.673851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.673879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.674041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.674068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.674230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.674255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.674471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.674499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.674698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.674727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.674916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.674941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.675159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.675187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.675376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.675404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.675596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.675628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.675799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.675827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 
00:34:38.424 [2024-07-23 03:34:04.676021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.676049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.676235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.676260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.676453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.676483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.424 qpair failed and we were unable to recover it. 00:34:38.424 [2024-07-23 03:34:04.676675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.424 [2024-07-23 03:34:04.676708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.676898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.676924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.677108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.677136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.677326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.677354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.677549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.677575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.677762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.677788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.677981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 
00:34:38.425 [2024-07-23 03:34:04.678167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.678340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.678543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.678759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.678929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.678955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.679115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.679144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.679361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.679387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.679595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.679628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.679798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.679824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.679988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.680014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 
00:34:38.425 [2024-07-23 03:34:04.680208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.680236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.680391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.680420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.680632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.680663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.680825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.680853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.681048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.681078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.681243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.681268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.681442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.681467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.681627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.681656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.681826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.681852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.682014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.682043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 
00:34:38.425 [2024-07-23 03:34:04.682228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.682255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.682473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.682498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.682681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.682710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.682896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.682924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.683091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.683116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.683289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.683331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.683521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.683549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.683746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.683772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.683930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.683956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 00:34:38.425 [2024-07-23 03:34:04.684144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.684172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.425 qpair failed and we were unable to recover it. 
00:34:38.425 [2024-07-23 03:34:04.684342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.425 [2024-07-23 03:34:04.684368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.684515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.684555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.684750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.684779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.684954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.684979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.685171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.685199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.685375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.685403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.685591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.685628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.685858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.685886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.686047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.686077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.686270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.686295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 
00:34:38.426 [2024-07-23 03:34:04.686512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.686540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.686766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.686795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.686992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.687017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.687210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.687237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.687445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.687473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.687642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.687670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.687864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.687907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.688094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.688122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.688292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.688317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.688486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.688514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 
00:34:38.426 [2024-07-23 03:34:04.688693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.688722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.688891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.688916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.689073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.689102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.689318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.689346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.689506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.689533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.689701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.689727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.689874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.689919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.690097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.690123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.690338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.690366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.690545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.690573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 
00:34:38.426 [2024-07-23 03:34:04.690751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.690778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.690955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.690984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.691171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.691199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.691383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.691408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.691582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.691607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.691763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.691788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.691931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.691957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.426 [2024-07-23 03:34:04.692172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.426 [2024-07-23 03:34:04.692201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.426 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.692355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.692383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.692578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.692604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 
00:34:38.427 [2024-07-23 03:34:04.692809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.692837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.693949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.693974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.694196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.694223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.694402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.694427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.694653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.694682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 
00:34:38.427 [2024-07-23 03:34:04.694885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.694911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.695082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.695107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.695282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.695307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.695472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.695500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.695682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.695709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.695860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.695885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.696066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.696091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.696261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.696287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.696431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.696460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.696649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.696678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 
00:34:38.427 [2024-07-23 03:34:04.696868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.696894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.697082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.697110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.697263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.697291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.697461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.697486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.697669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.697695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.697886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.697914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.698105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.698130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.698353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.698382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.698598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.698630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.698772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.698797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 
00:34:38.427 [2024-07-23 03:34:04.698954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.698979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.699181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.699206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.699417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.699442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.699628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.699667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.427 [2024-07-23 03:34:04.699852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.427 [2024-07-23 03:34:04.699889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.427 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.700057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.700084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.700271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.700300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.700488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.700516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.700744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.700770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.701002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.701030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 
00:34:38.428 [2024-07-23 03:34:04.701216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.701244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.701435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.701460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.701670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.701699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.701857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.701885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.702075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.702100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.702265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.702294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.702475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.702503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.702697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.702724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.702924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.702953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.703121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.703148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 
00:34:38.428 [2024-07-23 03:34:04.703344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.703369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.703536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.703564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.703723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.703753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.703914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.703939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.704108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.704133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.704275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.704301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.704437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.704462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.704659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.704686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.704878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.704909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.705046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.705071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 
00:34:38.428 [2024-07-23 03:34:04.705216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.705260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.705420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.705445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.705621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.705647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.705813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.705838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.706032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.706060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.706243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.706270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.706491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.706520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.706686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.706715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.706883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.706908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.707114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.707139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 
00:34:38.428 [2024-07-23 03:34:04.707379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.707407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.707592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.428 [2024-07-23 03:34:04.707629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.428 qpair failed and we were unable to recover it. 00:34:38.428 [2024-07-23 03:34:04.707835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.707861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.708060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.708085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.708254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.708280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.708497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.708526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.708711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.708740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.708929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.708955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.709170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.709198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.709383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.709410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 
00:34:38.429 [2024-07-23 03:34:04.709575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.709599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.709837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.709866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.710063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.710092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.710259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.710284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.710437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.710466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.710650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.710680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.710897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.710924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.711117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.711146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.711331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.711359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.711549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.711575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 
00:34:38.429 [2024-07-23 03:34:04.711752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.711778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.711918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.711944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.712095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.712120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.712287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.712314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.712499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.712527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.712689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.712715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.712865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.712891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.713118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.713146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.713342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.713368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.713537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.713565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 
00:34:38.429 [2024-07-23 03:34:04.713751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.713777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.713937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.713962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.714149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.714177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.714338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.429 [2024-07-23 03:34:04.714367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.429 qpair failed and we were unable to recover it. 00:34:38.429 [2024-07-23 03:34:04.714526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.714552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.714726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.714752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.714915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.714940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.715100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.715125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.715316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.715344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.715527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.715555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 
00:34:38.430 [2024-07-23 03:34:04.715734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.715760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.715979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.716007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.716192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.716220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.716418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.716447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.716638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.716666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.716845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.716874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.717071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.717096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.717244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.717269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.717415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.717440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.717604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.717646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 
00:34:38.430 [2024-07-23 03:34:04.717831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.717859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.718049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.718075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.718219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.718244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.718434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.718462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.718659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.718685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.718854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.718879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.719051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.719076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.719229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.719254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.719423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.719448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.719662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.719690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 
00:34:38.430 [2024-07-23 03:34:04.719873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.719901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.720065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.720090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.720260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.720285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.720503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.720531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.720728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.720755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.720903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.720929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.721156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.721184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.721380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.721406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.721569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.721598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.721774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.721800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 
00:34:38.430 [2024-07-23 03:34:04.721952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.721984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.430 [2024-07-23 03:34:04.722169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.430 [2024-07-23 03:34:04.722198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.430 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.722384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.722413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.722609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.722641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.722833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.722861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.723021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.723049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.723233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.723258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.723427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.723470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.723637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.723666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.723886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.723912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 
00:34:38.431 [2024-07-23 03:34:04.724114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.724139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.724285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.724311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.724509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.724534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.724697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.724726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.724939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.724968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.725138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.725164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.725304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.725329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.725472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.725497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.725670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.725696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.725868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.725897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 
00:34:38.431 [2024-07-23 03:34:04.726070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.726095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.726261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.726286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.726500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.726529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.726725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.726751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.726933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.726958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.727170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.727197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.727391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.727419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.727578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.727603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.727762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.727787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.727979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.728004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 
00:34:38.431 [2024-07-23 03:34:04.728221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.728246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.728415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.728441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.728649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.728678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.728840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.728865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.729013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.729040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.729204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.729230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.729402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.729427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.729625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.729652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.729828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.431 [2024-07-23 03:34:04.729856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.431 qpair failed and we were unable to recover it. 00:34:38.431 [2024-07-23 03:34:04.730043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.730069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 
00:34:38.432 [2024-07-23 03:34:04.730284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.730312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.730501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.730529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.730701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.730727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.730893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.730933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.731117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.731145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.731302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.731329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.731516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.731545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.731740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.731769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.731976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.732001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.732151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.732176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 
00:34:38.432 [2024-07-23 03:34:04.732358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.732383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.732582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.732610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.732816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.732842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.733028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.733057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.733255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.733280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.733456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.733482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.733656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.733685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.733873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.733898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.734116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.734144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.734341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.734366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 
00:34:38.432 [2024-07-23 03:34:04.734530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.734556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.734698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.734724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.734940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.734968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.735160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.735186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.735373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.735400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.735588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.735623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.735780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.735806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.735940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.735982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.736146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.736179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.736374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.736401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 
00:34:38.432 [2024-07-23 03:34:04.736625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.736654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.736844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.736872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.737035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.737060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.737226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.737268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.737436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.737464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.737653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.432 [2024-07-23 03:34:04.737680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.432 qpair failed and we were unable to recover it. 00:34:38.432 [2024-07-23 03:34:04.737867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.737895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.738107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.738134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.738322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.738349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.738488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.738531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 
00:34:38.433 [2024-07-23 03:34:04.738726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.738752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.738948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.738973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.739139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.739168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.739371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.739397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.739625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.739670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.739839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.739864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.740071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.740096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.740268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.740293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.740507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.740535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.740726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.740755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 
00:34:38.433 [2024-07-23 03:34:04.740935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.740960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.741101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.741126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.741311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.741339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.741535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.741560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.741754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.741782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.741972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.742004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.742189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.742214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.742398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.742426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.742595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.742631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.742827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.742852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 
00:34:38.433 [2024-07-23 03:34:04.743072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.743099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.743280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.743308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.743528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.743552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.743743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.743771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.743967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.743992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.744237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.744262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.744463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.744492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.744711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.744740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.744925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.744950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.745115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.745143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 
00:34:38.433 [2024-07-23 03:34:04.745341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.745366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.745527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.745552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.745715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.745741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.433 [2024-07-23 03:34:04.745904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.433 [2024-07-23 03:34:04.745931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.433 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.746120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.746145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.746305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.746333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.746543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.746570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.746762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.746789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.747012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.747040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.747225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.747253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 
00:34:38.434 [2024-07-23 03:34:04.747428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.747453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.747654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.747680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.747823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.747853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.748002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.748027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.748212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.748237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.748404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.748428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.748594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.748624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.748818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.748848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.749058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.749085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.749250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.749275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 
00:34:38.434 [2024-07-23 03:34:04.749461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.749489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.749680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.749709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.749874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.749898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.750056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.750083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.750282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.750307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.750469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.750494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.750689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.750718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.750879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.750908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.751065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.751090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.751263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.751288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 
00:34:38.434 [2024-07-23 03:34:04.751462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.751488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.751651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.751677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.751869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.751898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.752158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.752186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.752377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.752401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.434 [2024-07-23 03:34:04.752593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.434 [2024-07-23 03:34:04.752630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.434 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.752852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.752877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.753043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.753068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.753203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.753228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.753458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.753485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 
00:34:38.435 [2024-07-23 03:34:04.753690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.753716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.753889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.753914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.754101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.754129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.754324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.754349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.754536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.754564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.754744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.754769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.754966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.754991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.755181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.755209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.755381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.755406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.755578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.755603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 
00:34:38.435 [2024-07-23 03:34:04.755779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.755804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.755972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.756001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.756172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.756197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.756375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.756400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.756567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.756595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.756792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.756818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.756991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.757016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.757216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.757240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.757407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.757434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.757659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.757688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 
00:34:38.435 [2024-07-23 03:34:04.757849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.757877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.758070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.758094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.758285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.758313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.758513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.758538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.758703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.758728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.758917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.758945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.759133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.759161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.759385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.759410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.759562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.759587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.759734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.759759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 
00:34:38.435 [2024-07-23 03:34:04.759940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.759965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.760158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.760186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.760342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.760370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.435 [2024-07-23 03:34:04.760582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.435 [2024-07-23 03:34:04.760610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.435 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.760788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.760813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.761026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.761054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.761272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.761298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.761488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.761516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.761705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.761734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.761895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.761920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 
00:34:38.436 [2024-07-23 03:34:04.762129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.762161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.762368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.762396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.762562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.762586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.762745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.762771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.762997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.763025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.763203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.763228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.763418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.763447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.763619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.763648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.763843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.763868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.764052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.764080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 
00:34:38.436 [2024-07-23 03:34:04.764297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.764325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.764527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.764552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.764726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.764752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.764947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.764974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.765144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.765170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.765353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.765377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.765576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.765601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.765792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.765817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.766044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.766070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.766212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.766238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 
00:34:38.436 [2024-07-23 03:34:04.766433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.766459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.766665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.766691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.766858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.766883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.767097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.767123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.767314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.767342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.767529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.767557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.767748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.767774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.767923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.767952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.768132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.768158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.768339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.768364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 
00:34:38.436 [2024-07-23 03:34:04.768503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.436 [2024-07-23 03:34:04.768526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.436 qpair failed and we were unable to recover it. 00:34:38.436 [2024-07-23 03:34:04.768702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.768730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.768898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.768923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.769068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.769092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.769298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.769326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.769488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.769513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.769664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.769690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.769857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.769882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.770052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.770077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.770213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.770239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 
00:34:38.437 [2024-07-23 03:34:04.770431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.770459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.770714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.770741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.770883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.770925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.771107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.771134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.771283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.771308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.771497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.771526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.771706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.771732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.771881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.771905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.772112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.772140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.772321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.772348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 
00:34:38.437 [2024-07-23 03:34:04.772544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.772569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.772720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.772747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.773001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.773029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.773221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.773246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.773388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.773413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.773565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.773590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.773764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.773790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.774014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.774039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.774237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.774262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.774476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.774501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 
00:34:38.437 [2024-07-23 03:34:04.774756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.774784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.774969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.774996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.775180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.775205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.775351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.775376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.775549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.775574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.775745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.775771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.775933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.775958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.776149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.776176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.437 qpair failed and we were unable to recover it. 00:34:38.437 [2024-07-23 03:34:04.776347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.437 [2024-07-23 03:34:04.776372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.776595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.776631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 
00:34:38.438 [2024-07-23 03:34:04.776791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.776818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.776978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.777003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.777187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.777214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.777431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.777459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.777666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.777708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.777855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.777881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.778076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.778103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.778270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.778294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.778440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.778465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.778630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.778673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 
00:34:38.438 [2024-07-23 03:34:04.778905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.778931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.779102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.779126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.779278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.779304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.779451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.779477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.779625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.779651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.779798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.779840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.780004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.780030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.780197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.780239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.780452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.780480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.780692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.780717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 
00:34:38.438 [2024-07-23 03:34:04.780936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.780963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.781132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.781157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.781351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.781376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.781564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.781592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.781788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.781813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.781982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.782011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.782196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.782224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.782380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.782408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.782627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.782669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 00:34:38.438 [2024-07-23 03:34:04.782844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.782870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.438 qpair failed and we were unable to recover it. 
00:34:38.438 [2024-07-23 03:34:04.783079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.438 [2024-07-23 03:34:04.783105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.783277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.783301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.783476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.783501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.783690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.783718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.783877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.783902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.784087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.784115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.784299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.784326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.784512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.784536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.784756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.784785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.784966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.784991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 
00:34:38.439 [2024-07-23 03:34:04.785158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.785183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.785332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.785360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.785553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.785578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.785768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.785794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.785931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.785957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.786168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.786196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.786409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.786434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.786597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.786635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.786890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.786915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.787114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.787139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 
00:34:38.439 [2024-07-23 03:34:04.787330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.787358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.787562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.787591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.787770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.787799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.788013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.788042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.788219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.788247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.788435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.788460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.788661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.788690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.788876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.788904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.789091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.789115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.789307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.789334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 
00:34:38.439 [2024-07-23 03:34:04.789524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.789554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.789730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.789756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.789927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.789970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.790160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.790188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.790396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.790421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.790611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.790647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.790837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.790867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.791061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.439 [2024-07-23 03:34:04.791087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.439 qpair failed and we were unable to recover it. 00:34:38.439 [2024-07-23 03:34:04.791274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.791302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.791488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.791516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 
00:34:38.440 [2024-07-23 03:34:04.791708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.791734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.791886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.791911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.792166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.792193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.792379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.792403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.792593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.792629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.792821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.792849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.793016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.793041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.793227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.793255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.793435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.793463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.793646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.793676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 
00:34:38.440 [2024-07-23 03:34:04.793871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.793899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.794044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.794072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.794235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.794260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.794447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.794475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.794659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.794688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.794909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.794934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.795164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.795191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.795377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.795406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.795582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.795609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.795819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.795844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 
00:34:38.440 [2024-07-23 03:34:04.796030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.796058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.796250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.796275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.796477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.796504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.796663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.796694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.796891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.796917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.797083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.797113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.797266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.797293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.797463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.797488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.797635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.797662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.797825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.797853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 
00:34:38.440 [2024-07-23 03:34:04.798010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.798035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.798251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.798279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.798442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.798470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.798637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.798661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.798876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.798904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.440 [2024-07-23 03:34:04.799061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.440 [2024-07-23 03:34:04.799091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.440 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.799306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.799331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.799497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.799525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.799687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.799715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.799921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.799946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 
00:34:38.441 [2024-07-23 03:34:04.800095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.800121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.800320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.800345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.800575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.800604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.800834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.800859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.801061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.801089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.801301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.801326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.801520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.801548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.801740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.801767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.801962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.801988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.802173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.802198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 
00:34:38.441 [2024-07-23 03:34:04.802374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.802399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.802568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.802595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.802753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.802779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.802967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.802995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.803159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.803183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.803331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.803356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.803536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.803564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.803764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.803790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.804001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.804029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.804224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.804249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 
00:34:38.441 [2024-07-23 03:34:04.804419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.804444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.804621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.804666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.804839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.804864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.805042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.805067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.805253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.805278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.805441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.805469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.805657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.805683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.805899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.805927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.806089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.806114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.806326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.806354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 
00:34:38.441 [2024-07-23 03:34:04.806554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.806579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.806753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.806779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.806942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.806968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.441 [2024-07-23 03:34:04.807109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.441 [2024-07-23 03:34:04.807134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.441 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.807304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.807330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.807494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.807522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.807711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.807736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.807917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.807952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.808168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.808193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.808384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.808412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 
00:34:38.442 [2024-07-23 03:34:04.808570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.808598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.808795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.808823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.809011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.809036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.809210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.809234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.809422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.809451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.809640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.809669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.809867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.809893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.810043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.810069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.810255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.810282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.810463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.810491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 
00:34:38.442 [2024-07-23 03:34:04.810646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.810672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.810850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.810875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.811036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.811062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.811252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.811277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.811476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.811501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.811712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.811738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.811926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.811954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.812106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.812132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.812300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.812325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.812475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.812500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 
00:34:38.442 [2024-07-23 03:34:04.812707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.812733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.812891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.812919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.813112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.813139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.813333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.813361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.813600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.813642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.813830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.813856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.814089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.814139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.814319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.814347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.814531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.814558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.814743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.814777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 
00:34:38.442 [2024-07-23 03:34:04.814946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.814974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.442 [2024-07-23 03:34:04.815129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.442 [2024-07-23 03:34:04.815157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.442 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.815344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.815372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.815580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.815607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.815817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.815843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.816037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.816065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.816279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.816309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.816485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.816514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.816745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.816770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.816939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.816967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 
00:34:38.443 [2024-07-23 03:34:04.817159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.817185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.817382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.817407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.817619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.817645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.817814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.817845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.818033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.818060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.818271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.818296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.818436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.818461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.818598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.818647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.818826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.818854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.819037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.819065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 
00:34:38.443 [2024-07-23 03:34:04.819258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.819285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.819479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.819507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.819723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.819751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.819907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.819935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.820126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.820151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.820313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.820340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.820536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.820564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.820729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.820758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.820930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.820955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.821105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.821130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 
00:34:38.443 [2024-07-23 03:34:04.821263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.821288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.821454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.821479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.821655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.821680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.821873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.443 [2024-07-23 03:34:04.821903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.443 qpair failed and we were unable to recover it. 00:34:38.443 [2024-07-23 03:34:04.822054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.822082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 00:34:38.444 [2024-07-23 03:34:04.822308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.822336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 00:34:38.444 [2024-07-23 03:34:04.822530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.822555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 00:34:38.444 [2024-07-23 03:34:04.822736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.822762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 00:34:38.444 [2024-07-23 03:34:04.822951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.822979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 00:34:38.444 [2024-07-23 03:34:04.823197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.444 [2024-07-23 03:34:04.823224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.444 qpair failed and we were unable to recover it. 
00:34:38.444 [2024-07-23 03:34:04.823412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.444 [2024-07-23 03:34:04.823437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.444 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every retry through 2024-07-23 03:34:04.836303 ...]
00:34:38.445 [2024-07-23 03:34:04.836364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2187390 (9): Bad file descriptor
00:34:38.445 [2024-07-23 03:34:04.836627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.445 [2024-07-23 03:34:04.836678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420
00:34:38.445 qpair failed and we were unable to recover it.
[... the same three-line failure then repeats for tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 on every retry through 2024-07-23 03:34:04.867641, each ending in "qpair failed and we were unable to recover it." ...]
00:34:38.449 [2024-07-23 03:34:04.867826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.449 [2024-07-23 03:34:04.867859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.449 qpair failed and we were unable to recover it. 00:34:38.449 [2024-07-23 03:34:04.868023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.449 [2024-07-23 03:34:04.868048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.449 qpair failed and we were unable to recover it. 00:34:38.449 [2024-07-23 03:34:04.868217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.868261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.868440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.868468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.868664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.868690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.868879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.868906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.869098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.869126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.869329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.869354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.869546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.869574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.869758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.869787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 
00:34:38.450 [2024-07-23 03:34:04.869983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.870009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.870223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.870250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.870404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.870431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.870627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.870654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.870853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.870884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.871100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.871129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.871295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.871322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.871493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.871518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.871709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.871738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.871926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.871951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 
00:34:38.450 [2024-07-23 03:34:04.872141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.872169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.872354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.872382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.872576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.872601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.872811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.872836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.872978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.873004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.873221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.873247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.873444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.873473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.873676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.873702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.873873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.873898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.874091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.874118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 
00:34:38.450 [2024-07-23 03:34:04.874306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.874335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.874519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.874548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.874738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.874764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.874926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.874953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.875123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.875150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.875367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.875396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.875624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.875651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.875845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.875870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.450 [2024-07-23 03:34:04.876061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.450 [2024-07-23 03:34:04.876091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.450 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.876279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.876307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 
00:34:38.451 [2024-07-23 03:34:04.876478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.876511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.876687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.876713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.876879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.876904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.877035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.877060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.877206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.877231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.877429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.877454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.877632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.877660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.877824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.877852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.878010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.878037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.878251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.878276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 
00:34:38.451 [2024-07-23 03:34:04.878441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.878468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.878660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.878687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.878862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.878888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.879052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.879080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.879279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.879307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.879478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.879503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.879675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.879702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.879866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.879896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.880067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.880091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.880287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.880312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 
00:34:38.451 [2024-07-23 03:34:04.880503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.880532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.880697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.880724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.880910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.880940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.881100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.881128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.881295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.881320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.881486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.881531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.881754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.881780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.881962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.881989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.882214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.882243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.882455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.882484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 
00:34:38.451 [2024-07-23 03:34:04.882639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.882665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.882861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.882902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.883129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.883154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.883299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.883326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.883547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.883576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.451 [2024-07-23 03:34:04.883802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.451 [2024-07-23 03:34:04.883829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.451 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.884001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.884170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.884375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.884539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 
00:34:38.452 [2024-07-23 03:34:04.884751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.884967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.884995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.885162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.885189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.885369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.885395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.885543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.885569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.885766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.885793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.885985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.886012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.886201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.886229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.886392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.886418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.886640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.886670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 
00:34:38.452 [2024-07-23 03:34:04.886855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.886883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.887072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.887097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.887314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.887342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.887563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.887590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.887788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.887814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.888030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.888057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.888240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.888268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.888458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.888483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.888672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.888702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.888860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.888887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 
00:34:38.452 [2024-07-23 03:34:04.889057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.889082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.889218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.889244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.889416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.889441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.889657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.889700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.889838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.889864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.890099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.890124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.890297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.890323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.890489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.452 [2024-07-23 03:34:04.890520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.452 qpair failed and we were unable to recover it. 00:34:38.452 [2024-07-23 03:34:04.890744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.890770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.890939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.890965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 
00:34:38.453 [2024-07-23 03:34:04.891176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.891204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.891382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.891409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.891598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.891633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.891830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.891858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.892017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.892045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.892206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.892231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.892443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.892470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.892638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.892666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.892882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.892907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.893080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.893106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 
00:34:38.453 [2024-07-23 03:34:04.893292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.893326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.893528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.893554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.893724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.893750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.893938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.893965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.894186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.894212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.894381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.894409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.894590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.894627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.894787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.894812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.894957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.894983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.895178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.895203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 
00:34:38.453 [2024-07-23 03:34:04.895414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.895440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.895654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.895696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.895864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.895893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.896088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.896113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.896326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.896354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.896506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.896534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.896726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.896753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.896946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.896974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.897162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.897190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.897379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.897404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 
00:34:38.453 [2024-07-23 03:34:04.897586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.897622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.897796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.897821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.898013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.898038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.898268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.898294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.898460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.453 [2024-07-23 03:34:04.898486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.453 qpair failed and we were unable to recover it. 00:34:38.453 [2024-07-23 03:34:04.898625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.898651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.898824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.898849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.899053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.899082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.899305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.899330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.899499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.899527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 
00:34:38.454 [2024-07-23 03:34:04.899737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.899767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.899927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.899954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.900147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.900175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.900384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.900412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.900569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.900596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.900776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.900806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.900993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.901023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.901239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.901265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.901455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.901484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.901672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.901701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 
00:34:38.454 [2024-07-23 03:34:04.901894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.901924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.902147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.902175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.902383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.902411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.902578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.902604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.902806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.902834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.903018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.903046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.903236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.903261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.903445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.903472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.903657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.903686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.903847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.903873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 
00:34:38.454 [2024-07-23 03:34:04.904073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.904101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.904288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.904316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.904470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.904495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.904681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.904710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.904871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.904900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.905125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.905151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.905316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.905344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.905495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.905525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.905741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.905767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.905959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.905986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 
00:34:38.454 [2024-07-23 03:34:04.906173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.906201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.906425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.906450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.454 qpair failed and we were unable to recover it. 00:34:38.454 [2024-07-23 03:34:04.906673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.454 [2024-07-23 03:34:04.906702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.906896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.906922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.907096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.907120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.907310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.907337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.907513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.907541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.907715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.907743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.907932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.907960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.908173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.908202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 
00:34:38.455 [2024-07-23 03:34:04.908367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.908391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.908596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.908632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.908856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.908883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.909047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.909074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.909223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.909267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.909448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.909476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.909642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.909668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.909881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.909908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.910104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.910131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.910326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.910352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 
00:34:38.455 [2024-07-23 03:34:04.910511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.910546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.910762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.910792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.910986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.911012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.911206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.911234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.911420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.911449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.911645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.911670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.911840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.911865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.912036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.912065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.912252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.912278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.912451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.912481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 
00:34:38.455 [2024-07-23 03:34:04.912708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.912734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.912877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.912903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.913098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.913124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.913299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.913329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.913502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.913529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.913717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.913748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.913929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.913957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.914138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.914164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.914352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.914380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.455 qpair failed and we were unable to recover it. 00:34:38.455 [2024-07-23 03:34:04.914562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.455 [2024-07-23 03:34:04.914593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 
00:34:38.456 [2024-07-23 03:34:04.914795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.914822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.915063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.915089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.915305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.915333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.915549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.915575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.915758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.915784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.915947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.915975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.916180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.916215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.916438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.916479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.916709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.916743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.916983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.917019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 
00:34:38.456 [2024-07-23 03:34:04.917227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.917264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.917508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.917548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.917767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.917804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.918058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.918099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.918314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.918354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.918597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.918641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.918874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.918916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.919121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.919152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.919348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.919379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.919555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.919585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 
00:34:38.456 [2024-07-23 03:34:04.919771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.919807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.919980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.920011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.920246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.920275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.920442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.920470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.920680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.920709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.920898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.920927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.921097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.921124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.921269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.921294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.921458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.921483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.921677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.921708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 
00:34:38.456 [2024-07-23 03:34:04.921896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.921922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.922076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.922106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.922303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.922331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.922551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.922576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.922760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.922785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.922980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.923009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.456 [2024-07-23 03:34:04.923223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.456 [2024-07-23 03:34:04.923249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.456 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.923399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.923425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.923569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.923610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.923813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.923839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 
00:34:38.457 [2024-07-23 03:34:04.923989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.924014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.924193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.924218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.924358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.924385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.924560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.924586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.924831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.924858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.925000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.925024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.925213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.925242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.925447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.925472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.925627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.925652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.925801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.925827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 
00:34:38.457 [2024-07-23 03:34:04.926015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.926043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.926260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.926284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.926453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.926483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.926654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.926684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.926873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.926898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.927126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.927154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.927365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.927393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.927555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.927579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.927734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.927760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.927945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.927974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 
00:34:38.457 [2024-07-23 03:34:04.928139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.928171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.928355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.928384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.928567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.928596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.928802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.928829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.929043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.929072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.929227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.929256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.929442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.929467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.929623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.457 [2024-07-23 03:34:04.929649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.457 qpair failed and we were unable to recover it. 00:34:38.457 [2024-07-23 03:34:04.929818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.929862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.930052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.930077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 
00:34:38.458 [2024-07-23 03:34:04.930278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.930305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.930527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.930552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.930693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.930723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.930912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.930943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.931133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.931162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.931339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.931365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.931594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.931630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.931795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.931823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.931995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.932020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 00:34:38.458 [2024-07-23 03:34:04.932237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.458 [2024-07-23 03:34:04.932266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:38.458 qpair failed and we were unable to recover it. 
00:34:38.458 [2024-07-23 03:34:04.932444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.458 [2024-07-23 03:34:04.932473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420
00:34:38.458 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for tqpair=0x7f5008000b90 with timestamps 03:34:04.932645 through 03:34:04.933764 ...]
00:34:38.458 [2024-07-23 03:34:04.933932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.458 [2024-07-23 03:34:04.933972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5010000b90 with addr=10.0.0.2, port=4420
00:34:38.458 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for tqpair=0x7f5010000b90 with timestamps 03:34:04.934187 through 03:34:04.951607 ...]
00:34:38.460 [2024-07-23 03:34:04.951854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.460 [2024-07-23 03:34:04.951897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.460 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for tqpair=0x2179840 with timestamps 03:34:04.952091 through 03:34:04.977736 ...]
00:34:38.748 [2024-07-23 03:34:04.977955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.748 [2024-07-23 03:34:04.977983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.748 qpair failed and we were unable to recover it.
00:34:38.748 [2024-07-23 03:34:04.978146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.978172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.978370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.978395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.978582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.978611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.978828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.978853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.979042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.979069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.979258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.979285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.979482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.979507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.979668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.979695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.979853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.979882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.980042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.980067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 
00:34:38.748 [2024-07-23 03:34:04.980233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.980260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.980486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.980511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.980652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.980677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.980864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.980892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.981045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.981073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.981260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.981285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.981472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.981500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.981691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.981717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.748 [2024-07-23 03:34:04.981887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.748 [2024-07-23 03:34:04.981912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.748 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.982133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.982158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 
00:34:38.749 [2024-07-23 03:34:04.982338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.982365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.982526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.982552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.982743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.982772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.982935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.982963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.983155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.983181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.983368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.983396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.983583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.983624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.983821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.983846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.984036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.984064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.984256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.984283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 
00:34:38.749 [2024-07-23 03:34:04.984455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.984481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.984644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.984673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.984824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.984852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.985072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.985098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.985292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.985320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.985500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.985528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.985693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.985720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.985911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.985940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.986095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.986124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.986313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.986338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 
00:34:38.749 [2024-07-23 03:34:04.986523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.986549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.986730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.986756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.987001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.987026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.987229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.987254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.987418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.987444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.987637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.987666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.987824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.987850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.988053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.988079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.988323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.988348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.988523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.988551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 
00:34:38.749 [2024-07-23 03:34:04.988708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.988737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.988933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.988958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.989096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.989139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.989335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.989364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.989509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.989535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.989685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.989714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.749 [2024-07-23 03:34:04.989872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.749 [2024-07-23 03:34:04.989901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.749 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.990091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.990116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.990288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.990312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.990501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.990527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 
00:34:38.750 [2024-07-23 03:34:04.990771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.990797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.990993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.991021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.991205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.991233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.991395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.991420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.991628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.991657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.991841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.991869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.992064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.992089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.992282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.992310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.992468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.992496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.992662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.992688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 
00:34:38.750 [2024-07-23 03:34:04.992862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.992890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.993075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.993103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.993296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.993321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.993481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.993512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.993657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.993686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.993937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.993962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.994152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.994179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.994333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.994361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.994537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.994565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.994784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.994811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 
00:34:38.750 [2024-07-23 03:34:04.995001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.995029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.995226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.995252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.995416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.995443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.995655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.995682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.995881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.995907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.996126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.996333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.996358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.996493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.996518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.996678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.996704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.996890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.996918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 
00:34:38.750 [2024-07-23 03:34:04.997106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.997132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.997295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.997322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.997503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.997530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.997740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.997766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.750 [2024-07-23 03:34:04.997968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.750 [2024-07-23 03:34:04.997997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.750 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.998176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.998205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.998394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.998419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.998610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.998646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.998811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.998839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.999056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.999081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 
00:34:38.751 [2024-07-23 03:34:04.999255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.999283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.999482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.999507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.999674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.999700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:04.999911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:04.999939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.000129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.000155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.000352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.000377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.000548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.000573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.000748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.000774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.000972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.000998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.001138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.001163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 
00:34:38.751 [2024-07-23 03:34:05.001348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.001376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.001593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.001627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.001789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.001814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.002075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.002103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.002294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.002319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.002491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.002516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.002676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.002705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.002921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.002946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.003201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.003229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.003447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.003473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 
00:34:38.751 [2024-07-23 03:34:05.003643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.003669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.003807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.003836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.003986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.004180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.004395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.004575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.004777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.004953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.004994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.005199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.005224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.005396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.005422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 
00:34:38.751 [2024-07-23 03:34:05.005587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.005623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.005814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.751 [2024-07-23 03:34:05.005840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.751 qpair failed and we were unable to recover it. 00:34:38.751 [2024-07-23 03:34:05.005985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.006010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.006154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.006196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.006383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.006411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.006602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.006635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.006829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.006857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.007053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.007078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.007226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.007251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.007435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.007463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 
00:34:38.752 [2024-07-23 03:34:05.007646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.007675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.007866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.007892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.008049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.008077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.008239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.008269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.008456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.008485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.008691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.008717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.008901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.008929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.009119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.009144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.009357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.009390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.009576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.009605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 
00:34:38.752 [2024-07-23 03:34:05.009810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.009836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.010008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.010033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.010249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.010277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.010470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.010496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.010680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.010709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.010906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.010932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.011069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.011094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.011291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.011316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.011520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.011548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.011713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.011739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 
00:34:38.752 [2024-07-23 03:34:05.011909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.011952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.012164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.012192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.752 [2024-07-23 03:34:05.012370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.752 [2024-07-23 03:34:05.012395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.752 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.012536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.012562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.012720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.012746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.012937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.012963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.013160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.013188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.013389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.013414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.013584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.013609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.013784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.013809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 
00:34:38.753 [2024-07-23 03:34:05.014002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.014030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.014229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.014254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.014428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.014454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.014652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.014681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.014873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.014898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.015117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.015149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.015336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.015364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.015529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.015555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.015751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.015780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.015978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.016003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 
00:34:38.753 [2024-07-23 03:34:05.016148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.016173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.016345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.016370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.016554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.016582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.016757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.016783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.016972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.017000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.017163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.017191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.017384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.017409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.017594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.017630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.017822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.017847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.018018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.018044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 
00:34:38.753 [2024-07-23 03:34:05.018211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.018238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.018400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.018428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.018642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.018684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.018855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.018881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.019037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.019066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.019257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.019283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.019448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.019476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.019679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.019705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.019854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.019879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.753 [2024-07-23 03:34:05.020045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.020070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 
00:34:38.753 [2024-07-23 03:34:05.020267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.753 [2024-07-23 03:34:05.020295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.753 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.020479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.020507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.020654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.020697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.020874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.020917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.021111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.021136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.021303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.021328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.021540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.021569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.021757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.021783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.021967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.021994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.022172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.022200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 
00:34:38.754 [2024-07-23 03:34:05.022382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.022410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.022606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.022643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.022852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.022877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.023023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.023048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.023241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.023269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.023483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.023511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.023703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.023729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.023923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.023951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.024128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.024153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.024345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.024370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 
00:34:38.754 [2024-07-23 03:34:05.024575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.024602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.024781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.024806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.024953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.024978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.025175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.025202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.025355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.025383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.025610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.025641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.025836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.025864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.026067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.026092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.026259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.026285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.026498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.026526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 
00:34:38.754 [2024-07-23 03:34:05.026729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.026758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.026930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.026956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.027129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.027155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.027307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.027337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.027502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.027528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.027686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.027716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.027904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.027932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.028118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.754 [2024-07-23 03:34:05.028143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.754 qpair failed and we were unable to recover it. 00:34:38.754 [2024-07-23 03:34:05.028325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.028353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.028528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.028553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 
00:34:38.755 [2024-07-23 03:34:05.028720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.028746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.028931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.028961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.029151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.029177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.029319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.029349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.029535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.029563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.029754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.029780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.029951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.029976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.030203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.030228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.030392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.030417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.030587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.030623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 
00:34:38.755 [2024-07-23 03:34:05.030785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.030814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.031023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.031048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.031218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.031245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.031412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.031440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.031595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.031631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.031829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.031854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.032049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.032074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.032254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.032283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.032478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.032503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.032691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.032720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 
00:34:38.755 [2024-07-23 03:34:05.032930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.032958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.033129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.033154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.033348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.033373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.033575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.033601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.033755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.033780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.033922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.033947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.034090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.034132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.034326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.034351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.034550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.034578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.034779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.034805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 
00:34:38.755 [2024-07-23 03:34:05.034939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.034969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.035155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.035183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.035338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.035365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.035558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.035584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.035822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.035848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.036054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.755 [2024-07-23 03:34:05.036079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.755 qpair failed and we were unable to recover it. 00:34:38.755 [2024-07-23 03:34:05.036313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.036338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.036503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.036532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.036732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.036761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.036975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.037001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 
00:34:38.756 [2024-07-23 03:34:05.037161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.037188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.037401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.037428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.037612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.037642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.037790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.037816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.038038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.038240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.038434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.038599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.038809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.038993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.039021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 
00:34:38.756 [2024-07-23 03:34:05.039211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.039239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.039403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.039428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.039603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.039636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.039810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.039836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.039990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.040015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.040203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.040233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.040450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.040479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.040726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.040752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.040948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.040978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.041151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.041179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 
00:34:38.756 [2024-07-23 03:34:05.041343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.041369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.041534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.041562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.041756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.041782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.041985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.042010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.042175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.042203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.042409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.042437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.042608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.042639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.042811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.042837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.043043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.043071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.043266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.043291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 
00:34:38.756 [2024-07-23 03:34:05.043486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.043514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.043702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.043731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.043897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.043923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.756 [2024-07-23 03:34:05.044114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.756 [2024-07-23 03:34:05.044139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.756 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.044325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.044352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.044512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.044537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.044717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.044746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.044942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.044967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.045140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.045165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.045306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.045333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 
00:34:38.757 [2024-07-23 03:34:05.045552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.045580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.045763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.045789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.045963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.045988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.046135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.046160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.046335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.046361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.046582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.046610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.046799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.046824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.046994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.047019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.047207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.047236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.047389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.047427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 
00:34:38.757 [2024-07-23 03:34:05.047589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.047624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.047818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.047843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.048038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.048063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.048263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.048288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.048439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.048464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.048634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.048660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.048827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.048853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.049042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.049070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.049257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.049290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.049477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.049503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 
00:34:38.757 [2024-07-23 03:34:05.049697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.049726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.049882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.049911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.050087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.050112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.050280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.050306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.050469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.050495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.757 [2024-07-23 03:34:05.050659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.757 [2024-07-23 03:34:05.050685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.757 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.050882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.050910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.051121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.051149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.051340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.051366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.051538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.051563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 
00:34:38.758 [2024-07-23 03:34:05.051776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.051805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.052002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.052027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.052252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.052280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.052440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.052467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.052660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.052686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.052852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.052877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.053070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.053098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.053290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.053317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.053507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.053537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.053700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.053729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 
00:34:38.758 [2024-07-23 03:34:05.053936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.053961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.054099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.054124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.054314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.054341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.054552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.054580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.054773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.054801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.054988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.055023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.055219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.055244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.055415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.055443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.055665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.055694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.055888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.055913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 
00:34:38.758 [2024-07-23 03:34:05.056106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.056134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.056309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.056337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.056528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.056555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.056782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.056812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.057030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.057058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.057228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.057254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.057437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.057467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.057660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.057690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.057861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.057887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.058077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.058105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 
00:34:38.758 [2024-07-23 03:34:05.058285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.058313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.058502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.058527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.058721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.058750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.758 qpair failed and we were unable to recover it. 00:34:38.758 [2024-07-23 03:34:05.058908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.758 [2024-07-23 03:34:05.058936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.059156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.059182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.059389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.059414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.059560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.059585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.059732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.059758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.059949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.059977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.060171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.060196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 
00:34:38.759 [2024-07-23 03:34:05.060329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.060355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.060493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.060536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.060754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.060788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.060953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.060978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.061486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.061518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.061707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.061737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.061931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.061958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.062112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.062139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.062311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.062337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.062507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.062535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 
00:34:38.759 [2024-07-23 03:34:05.062708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.062735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.062908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.062933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.063068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.063094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.063263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.063289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.063481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.063509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.063705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.063732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.063898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.063928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.064117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.064145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.064335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.064361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.064531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.064556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 
00:34:38.759 [2024-07-23 03:34:05.064746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.064775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.064934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.064959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.065175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.065203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.065385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.065413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.065574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.065600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.065758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.065783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.065979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.066007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.066199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.066225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.066415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.066443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.066673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.066699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 
00:34:38.759 [2024-07-23 03:34:05.066852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.066877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.759 qpair failed and we were unable to recover it. 00:34:38.759 [2024-07-23 03:34:05.067078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.759 [2024-07-23 03:34:05.067107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.067293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.067321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.067493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.067518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.067705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.067735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.067896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.067924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.068133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.068159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.068344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.068372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.068559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.068587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.068762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.068788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 
00:34:38.760 [2024-07-23 03:34:05.068983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.069011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.069172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.069200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.069415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.069440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.069607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.069650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.069823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.069848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.070020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.070045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.070245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.070271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.070487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.070515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.070681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.070708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.070934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.070962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 
00:34:38.760 [2024-07-23 03:34:05.071132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.071160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.071345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.071370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.071565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.071594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.071777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.071805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.071979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.072004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.072215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.072243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.072429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.072457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.072639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.072673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.072841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.072869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.073082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.073110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 
00:34:38.760 [2024-07-23 03:34:05.073294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.073319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.073484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.073512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.073707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.073736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.073900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.073927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.074139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.074167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.074355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.074384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.074538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.074563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.074775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.074803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.075025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.075050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.760 qpair failed and we were unable to recover it. 00:34:38.760 [2024-07-23 03:34:05.075244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.760 [2024-07-23 03:34:05.075269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 
00:34:38.761 [2024-07-23 03:34:05.075436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.075468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.075659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.075688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.075876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.075901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.076067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.076092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.076289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.076318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.076497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.076525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.076710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.076736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.076919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.076947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.077134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.077159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 00:34:38.761 [2024-07-23 03:34:05.077324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.761 [2024-07-23 03:34:05.077349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.761 qpair failed and we were unable to recover it. 
00:34:38.761 [2024-07-23 03:34:05.077513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.761 [2024-07-23 03:34:05.077556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.761 qpair failed and we were unable to recover it.
[... the same three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each successive connection attempt from 03:34:05.077 through 03:34:05.120 ...]
00:34:38.767 [2024-07-23 03:34:05.120208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.767 [2024-07-23 03:34:05.120234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.767 qpair failed and we were unable to recover it.
00:34:38.767 [2024-07-23 03:34:05.120380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.120408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.120560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.120585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.120759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.120785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.120945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.120971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.121134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.121159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.121330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.121355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.121494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.121519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.121683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.121709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.121879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.121905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.122049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 
00:34:38.767 [2024-07-23 03:34:05.122219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.122389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.122549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.122753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.122944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.122969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.123115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.123140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.123311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.123337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.123472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.123498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.123668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.123694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.123857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.123883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 
00:34:38.767 [2024-07-23 03:34:05.124051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.124076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.124276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.124301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.124466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.124492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.124687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.124713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.124911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.124940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.125110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.125135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.125308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.125333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.125472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.125498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.125674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.125700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.125868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.125894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 
00:34:38.767 [2024-07-23 03:34:05.126032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.126059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.126229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.126255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.126400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.767 [2024-07-23 03:34:05.126426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.767 qpair failed and we were unable to recover it. 00:34:38.767 [2024-07-23 03:34:05.126624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.126650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.126828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.126854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.127030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.127056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.127202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.127227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.127394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.127419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.127622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.127649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.127823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.127849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 
00:34:38.768 [2024-07-23 03:34:05.128018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.128044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.128210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.128235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.128376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.128401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.128575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.128601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.128783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.128808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.128983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.129008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.129143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.129169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.129364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.129389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.129591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.129625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.129822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.129848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 
00:34:38.768 [2024-07-23 03:34:05.130027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.130052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.130200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.130225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.130402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.130427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.130621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.130646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.130845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.130871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.131035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.131060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.131204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.131230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.131425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.131450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.131619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.131646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.131822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.131847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 
00:34:38.768 [2024-07-23 03:34:05.132008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.132033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.132177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.132203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.132399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.132424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.132573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.132598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.132812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.132837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.133002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.133171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.133364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.133560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.133782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 
00:34:38.768 [2024-07-23 03:34:05.133948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.768 [2024-07-23 03:34:05.133973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.768 qpair failed and we were unable to recover it. 00:34:38.768 [2024-07-23 03:34:05.134116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.134141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.134346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.134371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.134537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.134563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.134764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.134790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.134933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.134958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.135129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.135155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.135318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.135343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.135519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.135544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.135717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.135743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 
00:34:38.769 [2024-07-23 03:34:05.135883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.135909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.136056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.136083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.136248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.136274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.136437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.136462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.136628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.136654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.136848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.136873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.137041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.137066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.137255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.137281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.137447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.137472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.137618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.137644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 
00:34:38.769 [2024-07-23 03:34:05.137811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.137836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.137998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.138024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.138231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.138260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.138422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.138447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.138619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.138645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.138790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.138815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.139003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.139172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.139389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.139560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 
00:34:38.769 [2024-07-23 03:34:05.139754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.139945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.139970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.140140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.140165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.140330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.140355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.140519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.140544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.140719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.140744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.140947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.140973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.141142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.141168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.141305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.769 [2024-07-23 03:34:05.141331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.769 qpair failed and we were unable to recover it. 00:34:38.769 [2024-07-23 03:34:05.141501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.141526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 
00:34:38.770 [2024-07-23 03:34:05.141670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.141696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.141836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.141862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.142938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.142964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.143136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.143161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.143355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.143384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 
00:34:38.770 [2024-07-23 03:34:05.143552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.143577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.143728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.143754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.143932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.143957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.144130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.144155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.144323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.144348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.144526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.144551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.144722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.144748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.144884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.144909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.145082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.145108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 00:34:38.770 [2024-07-23 03:34:05.145272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.770 [2024-07-23 03:34:05.145297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.770 qpair failed and we were unable to recover it. 
00:34:38.770 [2024-07-23 03:34:05.145462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.770 [2024-07-23 03:34:05.145487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.770 qpair failed and we were unable to recover it.
00:34:38.770 ... (the same three-line failure — connect() errno = 111 in posix_sock_create, sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." — repeats continuously for roughly 200 further attempts, from 03:34:05.145660 through 03:34:05.184878) ...
00:34:38.776 [2024-07-23 03:34:05.185048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.776 [2024-07-23 03:34:05.185075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:38.776 qpair failed and we were unable to recover it.
00:34:38.776 [2024-07-23 03:34:05.185220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.185245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.185419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.185446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.185634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.185660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.185857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.185882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.186049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.186074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.186238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.186264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.186432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.186457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.186661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.186688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.186854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.186879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.187023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.187048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 
00:34:38.776 [2024-07-23 03:34:05.187200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.187225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.187367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.187392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.187579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.187604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.187784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.187809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.187975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.188142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.188335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.188505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.188700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.188894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.188919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 
00:34:38.776 [2024-07-23 03:34:05.189061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.189087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.189218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.189244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.776 [2024-07-23 03:34:05.189442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.776 [2024-07-23 03:34:05.189468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.776 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.189634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.189661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.189811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.189837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.190009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.190188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.190412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.190576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.190781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 
00:34:38.777 [2024-07-23 03:34:05.190952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.190978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.191125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.191150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.191329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.191354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.191541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.191566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.191745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.191770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.191943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.191968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.192162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.192187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.192352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.192377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.192519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.192544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.192686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.192711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 
00:34:38.777 [2024-07-23 03:34:05.192904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.192929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.193076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.193101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.193272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.193297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.193438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.193463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.193609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.193648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.193843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.193868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.194060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.194085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.194229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.194254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.194398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.194423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.194592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.194625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 
00:34:38.777 [2024-07-23 03:34:05.194792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.194817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.194989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.195181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.195346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.195506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.195730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.195924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.195949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.196123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.196149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.196310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.196335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.196474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.196499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 
00:34:38.777 [2024-07-23 03:34:05.196669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.777 [2024-07-23 03:34:05.196695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.777 qpair failed and we were unable to recover it. 00:34:38.777 [2024-07-23 03:34:05.196849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.196874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.197949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.197974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.198137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.198162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.198324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.198348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 
00:34:38.778 [2024-07-23 03:34:05.198512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.198537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.198705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.198732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.198896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.198921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.199059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.199086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.199255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.199280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.199450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.199476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.199644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.199670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.199818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.199844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.200018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.200217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 
00:34:38.778 [2024-07-23 03:34:05.200415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.200582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.200784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.200950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.200975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.201116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.201143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.201307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.201333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.201473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.201499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.201665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.201692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.201863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.201888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.202056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.202081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 
00:34:38.778 [2024-07-23 03:34:05.202249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.202275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.202451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.202481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.202625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.202651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.202818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.202843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.203017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.203042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.203188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.203213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.203384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.203409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.203583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.203608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.778 [2024-07-23 03:34:05.203783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.778 [2024-07-23 03:34:05.203808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.778 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.204005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-23 03:34:05.204175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.204343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.204517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.204716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.204913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.204938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.205092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.205118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.205266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.205292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.205472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.205498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.205648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.205674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.205821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.205846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-23 03:34:05.205984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.206156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.206316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.206487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.206682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.206852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.206877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.207072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.207244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.207404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.207577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-23 03:34:05.207755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.207921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.207947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.208142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.208168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.208312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.208337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.208471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.208496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.208645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.208672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.208820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.208844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.209012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.209037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.209178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.209203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 00:34:38.779 [2024-07-23 03:34:05.209340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.209365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it. 
00:34:38.779 [2024-07-23 03:34:05.209510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.779 [2024-07-23 03:34:05.209535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:38.779 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111), sock connection error, and "qpair failed and we were unable to recover it." messages repeat continuously for tqpair=0x2179840 with addr=10.0.0.2, port=4420 from 03:34:05.209 through 03:34:05.232 ...]
00:34:38.783 [2024-07-23 03:34:05.232960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.783 [2024-07-23 03:34:05.233004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.783 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats continuously for tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 through 03:34:05.249 ...]
00:34:38.785 [2024-07-23 03:34:05.250107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.250134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.250307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.250334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.250520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.250548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.250716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.250744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.250893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.250921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.251096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.251123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.251293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.251320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.251490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.251521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.251664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.251692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.251865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.251892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 
00:34:38.785 [2024-07-23 03:34:05.252063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.252091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.252255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.252281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.252449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.252477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.785 [2024-07-23 03:34:05.252629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.785 [2024-07-23 03:34:05.252658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.785 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.252811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.252838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.253036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.253063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.253235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.253263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.253427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.253453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.253640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.253669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.253819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.253846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 
00:34:38.786 [2024-07-23 03:34:05.254042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.254069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.254268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.254295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.254448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.254476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.254625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.254653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.254846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.254873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.255051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.255078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.255250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.255277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.255474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.255501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.255673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.255701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.255851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.255879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 
00:34:38.786 [2024-07-23 03:34:05.256021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.256048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.256247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.256274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.256470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.256496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.256674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.256702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.256850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.256877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.257073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.257100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.257348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.257375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.257570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.257598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.257784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.257811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.257972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 
00:34:38.786 [2024-07-23 03:34:05.258142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.258341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.258564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.258768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.258969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.258996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.259171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.259198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.259368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.259395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.259558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.259590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.259741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.259769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.259937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.259964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 
00:34:38.786 [2024-07-23 03:34:05.260157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.786 [2024-07-23 03:34:05.260184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.786 qpair failed and we were unable to recover it. 00:34:38.786 [2024-07-23 03:34:05.260353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.260381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.260636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.260665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.260834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.260862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.261053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.261080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.261326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.261353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.261520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.261547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.261797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.261824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.261994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.262020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.262218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.262244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 
00:34:38.787 [2024-07-23 03:34:05.262407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.262434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.262608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.262642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.262808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.262835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.263003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.263029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.263236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.263263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.263435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.263462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.263608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.263642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.263813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.263840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.264035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.264063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.264235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.264262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 
00:34:38.787 [2024-07-23 03:34:05.264438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.264465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.264638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.264665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.264913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.264941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.265115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.265141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.265308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.265339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.265493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.265521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.265678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.265705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.265873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.265899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.266043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.266070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.266277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.266303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 
00:34:38.787 [2024-07-23 03:34:05.266471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.266499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.266651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.266679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.266873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.266900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.267092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.267119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.267258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.267285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.267451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.267478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.267628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.267655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.267797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.267824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.787 [2024-07-23 03:34:05.268020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.787 [2024-07-23 03:34:05.268047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.787 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.268243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.268270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 
00:34:38.788 [2024-07-23 03:34:05.268440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.268467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.268643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.268671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.268842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.268869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.269019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.269046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.269210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.269237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.269431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.269459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.269631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.269658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.269857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.269885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.270055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.270084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.270256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.270283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 
00:34:38.788 [2024-07-23 03:34:05.270419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.270447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.270629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.270656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.270859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.270886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.271063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.271090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.271254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.271280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.271421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.271448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.271698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.271725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.271873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.271900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.272047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.272074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.272219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.272246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 
00:34:38.788 [2024-07-23 03:34:05.272437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.272464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.272642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.272669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.272843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.272871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.273046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.273225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.273418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.273586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.273812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.273982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.274181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 
00:34:38.788 [2024-07-23 03:34:05.274373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.274570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.274772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.274970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.274997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.275141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.275170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.275420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.275448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.788 qpair failed and we were unable to recover it. 00:34:38.788 [2024-07-23 03:34:05.275617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.788 [2024-07-23 03:34:05.275645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.275785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.275812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.275985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.276013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.276209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.276235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 
00:34:38.789 [2024-07-23 03:34:05.276407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.276433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.276629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.276656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.276803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.276830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.277002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.277029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.277198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.277225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.277376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.277403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.277573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.277600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.277786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.277814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.278008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.278034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.278185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.278212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 
00:34:38.789 [2024-07-23 03:34:05.278380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.278407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.278580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.278607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.278782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.278809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.278975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.279001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.279154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.279182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.279378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.279406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.279623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.279650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.279821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.279848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.280043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.280070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.280220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.280247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 
00:34:38.789 [2024-07-23 03:34:05.280444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.280471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.280671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.280699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.280870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.280897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.281048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.281075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.281271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.281302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.281475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.281504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.281652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.281680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.281847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.281874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.789 qpair failed and we were unable to recover it. 00:34:38.789 [2024-07-23 03:34:05.282046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.789 [2024-07-23 03:34:05.282073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.282244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.282271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 
00:34:38.790 [2024-07-23 03:34:05.282420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.282447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.282623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.282651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.282816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.282842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.283033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.283060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.283228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.283255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.283439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.283466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.283660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.283688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.283862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.283888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.284038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.284064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.284268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.284295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 
00:34:38.790 [2024-07-23 03:34:05.284464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.284491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.284665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.284692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.284862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.284889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.285058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.285085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.285226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.285253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.285419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.285446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.285623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.285651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.285802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.285829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.286020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.286197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 
00:34:38.790 [2024-07-23 03:34:05.286364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.286534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.286742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.286911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.286938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.287114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.287140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.287287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.287314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.287511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.287538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.287705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.287732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.287910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.287937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.288109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.288137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 
00:34:38.790 [2024-07-23 03:34:05.288312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.288339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.288511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.288538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.288681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.288708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.288882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.288909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.289089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.289120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.289292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.790 [2024-07-23 03:34:05.289319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.790 qpair failed and we were unable to recover it. 00:34:38.790 [2024-07-23 03:34:05.289509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.289536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.289708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.289735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.289905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.289933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.290109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.290136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 
00:34:38.791 [2024-07-23 03:34:05.290302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.290329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.290509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.290535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.290711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.290738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.290937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.290964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.291139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.291166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.291340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.291367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.291563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.291590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.291774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.291802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.291954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.291981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.292153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.292180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 
00:34:38.791 [2024-07-23 03:34:05.292351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.292378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.292547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.292574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.292754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.292783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.292966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.292993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.293134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.293161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.293333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.293360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.293506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.293534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.293739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.293767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.293943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.293971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.294137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.294163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 
00:34:38.791 [2024-07-23 03:34:05.294336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.294363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.294534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.294561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.294736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.294764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.294958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.294985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.295134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.295161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.295331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.295358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.295506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.295534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.295705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.295733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.295904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.295931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.296100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.296128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 
00:34:38.791 [2024-07-23 03:34:05.296296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.296322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.296461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.296489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.296662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.296690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.296863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.296890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.791 [2024-07-23 03:34:05.297034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.791 [2024-07-23 03:34:05.297066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.791 qpair failed and we were unable to recover it. 00:34:38.792 [2024-07-23 03:34:05.297209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.792 [2024-07-23 03:34:05.297237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:38.792 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.297402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.297429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.297570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.297598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.297757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.297784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.297934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.297962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 
00:34:39.074 [2024-07-23 03:34:05.298104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.298131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.298303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.298331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.298481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.298508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.298671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.298699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.298841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.298868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.299052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.299079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.299277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.299304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.299454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.299482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.299636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.299664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.299805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.299832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 
00:34:39.074 [2024-07-23 03:34:05.300007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.300034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.300183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.300211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.300357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.300384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.074 [2024-07-23 03:34:05.300530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.074 [2024-07-23 03:34:05.300559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.074 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.300738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.300766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.300937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.300964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.301137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.301165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.301317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.301345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.301514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.301541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.301719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.301746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 
00:34:39.075 [2024-07-23 03:34:05.301917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.301945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.302121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.302149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.302316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.302344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.302489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.302516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.302684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.302712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.302909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.302936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.303078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.303106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.303280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.303308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.303448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.303475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.303650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.303678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 
00:34:39.075 [2024-07-23 03:34:05.303850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.303877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.304038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.304065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.304245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.304272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.304466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.304493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.304649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.304682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.304856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.304884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.305066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.305093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.305264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.305292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.305441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.305468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.305620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.305647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 
00:34:39.075 [2024-07-23 03:34:05.305820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.305847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.305997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.306196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.306415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.306585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.306768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.306970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.306999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.307134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.307162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.307335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.307362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.307504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.307532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 
00:34:39.075 [2024-07-23 03:34:05.307706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.075 [2024-07-23 03:34:05.307734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.075 qpair failed and we were unable to recover it. 00:34:39.075 [2024-07-23 03:34:05.307873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.307900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.308068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.308095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.308246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.308274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.308444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.308471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.308620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.308648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.308816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.308843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.309041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.309067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.309208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.309236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.309411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.309438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 
00:34:39.076 [2024-07-23 03:34:05.309589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.309631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.309810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.309839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.310012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.310040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.310224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.310251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.310444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.310471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.310645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.310683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.310852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.310879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.311077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.311104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.311273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.311300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.311444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.311471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 
00:34:39.076 [2024-07-23 03:34:05.311655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.311683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.311888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.311915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.312074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.312102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.312276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.312303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.312477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.312510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.312679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.312707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.312879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.312908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.313057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.313086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.313253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.313280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.313421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.313449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 
00:34:39.076 [2024-07-23 03:34:05.313647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.313674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.313850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.313877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.314053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.314081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.314222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.314249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.314397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.314424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.314622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.314650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.314820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.314847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.315015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.315042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.076 [2024-07-23 03:34:05.315188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.076 [2024-07-23 03:34:05.315216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.076 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.315390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.315418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 
00:34:39.077 [2024-07-23 03:34:05.315569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.315597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.315769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.315796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.315959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.315986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.316185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.316212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.316382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.316410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.316579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.316606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.316764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.316791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.316953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.316981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.317174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.317201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.317340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.317368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 
00:34:39.077 [2024-07-23 03:34:05.317540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.317567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.317756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.317783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.317982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.318153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.318326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.318496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.318694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.318893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.318920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.319113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.319140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.319284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.319312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 
00:34:39.077 [2024-07-23 03:34:05.319481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.319508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.319677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.319705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.319847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.319875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.320024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.320050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.320218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.320249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.320395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.320423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.320608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.320642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.320793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.320820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.321018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.321045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.321187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.321215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 
00:34:39.077 [2024-07-23 03:34:05.321415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.321442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.321640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.321668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.321837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.321865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.322070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.322097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.322267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.322293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.322457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.077 [2024-07-23 03:34:05.322484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.077 qpair failed and we were unable to recover it. 00:34:39.077 [2024-07-23 03:34:05.322657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.322684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.322852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.322881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.323061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.323089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.323258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.323286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 
00:34:39.078 [2024-07-23 03:34:05.323459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.323487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.323655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.323682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.323830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.323857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.324961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.324987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.325135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.325162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 
00:34:39.078 [2024-07-23 03:34:05.325366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.325393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.325570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.325597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.325766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.325793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.325963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.325991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.326163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.326192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.326334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.326363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.326537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.326565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.326717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.326745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.326911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.326938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.327107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.327134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 
00:34:39.078 [2024-07-23 03:34:05.327305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.327332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.327501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.327529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.327698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.327727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.327928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.327955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.328128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.328158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.328340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.328366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.328513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.328540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.328710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.328737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.328914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.328940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.329108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.329135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 
00:34:39.078 [2024-07-23 03:34:05.329328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.329355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.329527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.329554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.329724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.329752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.329918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.078 [2024-07-23 03:34:05.329944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.078 qpair failed and we were unable to recover it. 00:34:39.078 [2024-07-23 03:34:05.330115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.330144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.330309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.330336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.330508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.330535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.330705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.330734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.330909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.330938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.331081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.331108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 
00:34:39.079 [2024-07-23 03:34:05.331278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.331306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.331449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.331477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.331645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.331674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.331854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.331882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.332054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.332085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.332282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.332310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.332504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.332530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.332699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.332727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.332899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.332927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.333126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.333152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 
00:34:39.079 [2024-07-23 03:34:05.333322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.333351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.333520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.333548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.333718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.333746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.333925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.333952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.334131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.334158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.334337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.334364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.334514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.334540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.334706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.334746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.334892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.334920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.335061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.335088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 
00:34:39.079 [2024-07-23 03:34:05.335253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.335279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.335421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.335450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.335623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.335652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.335826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.079 [2024-07-23 03:34:05.335853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.079 qpair failed and we were unable to recover it. 00:34:39.079 [2024-07-23 03:34:05.336054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.336085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.336257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.336285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.336437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.336464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.336625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.336652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.336803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.336830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.336989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.337016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 
00:34:39.080 [2024-07-23 03:34:05.337186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.337213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.337413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.337440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.337582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.337610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.337800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.337827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.337979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.338006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.338183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.338211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.338397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.338424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.338626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.338654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.338835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.338862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.339038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.339064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 
00:34:39.080 [2024-07-23 03:34:05.339259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.339286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.339455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.339482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.339675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.339703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.339875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.339901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.340070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.340096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.340272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.340298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.340495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.340521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.340703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.340731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.340891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.340917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.341089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.341115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 
00:34:39.080 [2024-07-23 03:34:05.341263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.341291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.341492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.341519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.341666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.341694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.341833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.341859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.342006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.342034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.342208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.342235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.342404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.342431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.342624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.342652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.342818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.342845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 00:34:39.080 [2024-07-23 03:34:05.343040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.080 [2024-07-23 03:34:05.343067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.080 qpair failed and we were unable to recover it. 
00:34:39.080 [2024-07-23 03:34:05.343245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.080 [2024-07-23 03:34:05.343272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.080 qpair failed and we were unable to recover it.
00:34:39.080 [2024-07-23 03:34:05.343438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.080 [2024-07-23 03:34:05.343465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.080 qpair failed and we were unable to recover it.
00:34:39.081 [2024-07-23 03:34:05.343617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.081 [2024-07-23 03:34:05.343645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.081 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) against tqpair=0x7f5018000b90, addr=10.0.0.2, port=4420 repeats for every attempt timestamped 2024-07-23 03:34:05.343807 through 03:34:05.384822; each qpair fails and cannot be recovered ...]
00:34:39.087 [2024-07-23 03:34:05.384986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.087 [2024-07-23 03:34:05.385013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.087 qpair failed and we were unable to recover it.
00:34:39.087 [2024-07-23 03:34:05.385181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.385207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.385360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.385388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.385588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.385632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.385833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.385861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.386058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.386230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.386427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.386650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.386857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.386994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.387021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 
00:34:39.088 [2024-07-23 03:34:05.387219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.387246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.387419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.387446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.387588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.387620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.387767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.387796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.387995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.388217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.388415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.388579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.388787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.388963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.388991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 
00:34:39.088 [2024-07-23 03:34:05.389239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.389266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.389440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.389471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.389647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.389675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.389820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.389847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.390058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.390086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.390264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.390291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.390489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.088 [2024-07-23 03:34:05.390516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.088 qpair failed and we were unable to recover it. 00:34:39.088 [2024-07-23 03:34:05.390687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.390716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.390886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.390921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.391118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.391145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 
00:34:39.089 [2024-07-23 03:34:05.391321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.391348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.391493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.391521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.391693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.391721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.391917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.391944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.392108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.392136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.392318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.392346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.392546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.392573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.392780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.392808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.392970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.392997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.393167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.393195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 
00:34:39.089 [2024-07-23 03:34:05.393364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.393392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.393561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.393588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.393742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.393770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.393961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.393988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.394156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.394183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.394351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.394378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.394573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.394600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.394753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.394780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.394933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.394960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.395102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.395130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 
00:34:39.089 [2024-07-23 03:34:05.395276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.395302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.395469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.395497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.395650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.395677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.395821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.395848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.396045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.396072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.396267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.396294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.089 [2024-07-23 03:34:05.396460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.089 [2024-07-23 03:34:05.396487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.089 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.396657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.396685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.396885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.396913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.397158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.397185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 
00:34:39.090 [2024-07-23 03:34:05.397390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.397417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.397558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.397589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.397759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.397787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.397956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.397983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.398153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.398180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.398375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.398402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.398591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.398626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.398771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.398799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.398989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.399017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.399212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.399239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 
00:34:39.090 [2024-07-23 03:34:05.399414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.399441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.399581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.399609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.399816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.399844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.400948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.400975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.401138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.401165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 
00:34:39.090 [2024-07-23 03:34:05.401338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.401365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.401535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.401562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.401733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.401762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.401908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.401935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.402077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.402104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.402272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.402300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.402467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.402495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.402665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.090 [2024-07-23 03:34:05.402693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.090 qpair failed and we were unable to recover it. 00:34:39.090 [2024-07-23 03:34:05.402847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.402875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.403051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 
00:34:39.091 [2024-07-23 03:34:05.403227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.403392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.403575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.403776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.403945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.403973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.404151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.404178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.404351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.404378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.404552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.404579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.404762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.404790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.404967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.404995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 
00:34:39.091 [2024-07-23 03:34:05.405168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.405195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.405363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.405394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.405562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.405589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.405770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.405798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.405968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.405995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.406187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.406214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.406389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.406416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.406582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.406609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.406786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.406814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.406998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 
00:34:39.091 [2024-07-23 03:34:05.407193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.407356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.407533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.407730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.407894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.407921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.408099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.408126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.408297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.408325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.408493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.408521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.408729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.408757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.408929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.408956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 
00:34:39.091 [2024-07-23 03:34:05.409152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.409179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.409329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.409356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.409523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.409550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.409716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.409744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.409939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.409966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.091 qpair failed and we were unable to recover it. 00:34:39.091 [2024-07-23 03:34:05.410164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.091 [2024-07-23 03:34:05.410191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.092 qpair failed and we were unable to recover it. 00:34:39.092 [2024-07-23 03:34:05.410386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.092 [2024-07-23 03:34:05.410413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.092 qpair failed and we were unable to recover it. 00:34:39.092 [2024-07-23 03:34:05.410584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.092 [2024-07-23 03:34:05.410611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.092 qpair failed and we were unable to recover it. 00:34:39.092 [2024-07-23 03:34:05.410802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.092 [2024-07-23 03:34:05.410830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.092 qpair failed and we were unable to recover it. 00:34:39.092 [2024-07-23 03:34:05.410974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.092 [2024-07-23 03:34:05.411003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.092 qpair failed and we were unable to recover it. 
00:34:39.092 [2024-07-23 03:34:05.411201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.092 [2024-07-23 03:34:05.411229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.092 qpair failed and we were unable to recover it.
[Condensed: the same three-message failure sequence repeats for every connection attempt logged between 03:34:05.411201 and 03:34:05.454144 (capture timestamps 00:34:39.092 through 00:34:39.099, on the order of two hundred attempts). Only the timestamps vary, apart from a handful of attempts reported against tqpair=0x7f5010000b90 and tqpair=0x2179840 instead of tqpair=0x7f5018000b90. Every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:34:39.099 [2024-07-23 03:34:05.454336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.454371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.454547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.454574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.454736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.454763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.454918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.454955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.455104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.455133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.455304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.455332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.455505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.455532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.455730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.455756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.455924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.455950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.456118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.456144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 
00:34:39.099 [2024-07-23 03:34:05.456315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.456341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.456513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.456540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.456739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.456765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.456917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.456946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.457120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.457156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.457355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.457381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.457527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.457553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.457703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.457731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.457869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.457905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.458078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.458105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 
00:34:39.099 [2024-07-23 03:34:05.458307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.458333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.458477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.458503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.458701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.458728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.458878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.458904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.459079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.459105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.459277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.459303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.459468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.459496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.459687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.459728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.459890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.459923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.460115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.460142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 
00:34:39.099 [2024-07-23 03:34:05.460319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.460346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.460498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.460525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.460724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.460752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.460905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.460932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.099 qpair failed and we were unable to recover it. 00:34:39.099 [2024-07-23 03:34:05.461072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.099 [2024-07-23 03:34:05.461100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.461269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.461301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.461516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.461542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.461682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.461710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.461856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.461890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.462069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.462096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 
00:34:39.100 [2024-07-23 03:34:05.462252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.462279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.462461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.462489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.462678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.462706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.462881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.462909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.463054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.463081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.463222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.463249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.463424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.463461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.463631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.463670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.463809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.463836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.464045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.464072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 
00:34:39.100 [2024-07-23 03:34:05.464247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.464273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.464417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.464444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.464623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.464656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.464799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.464826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.464994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.465020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.465195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.465226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.465427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.465453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.466352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.466392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.466571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.466599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.466791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.466817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 
00:34:39.100 [2024-07-23 03:34:05.466985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.467186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.467404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.467578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.467770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.467942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.100 [2024-07-23 03:34:05.467969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.100 qpair failed and we were unable to recover it. 00:34:39.100 [2024-07-23 03:34:05.468123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.468149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.468312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.468347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.468495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.468522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.468725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.468753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 
00:34:39.101 [2024-07-23 03:34:05.468896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.468924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.469116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.469144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.469326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.469353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.469535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.469563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.469735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.469762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.469929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.469956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.470128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.470155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.470325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.470358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.470528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.470555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.470727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.470754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 
00:34:39.101 [2024-07-23 03:34:05.470886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.470912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.471084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.471110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.471249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.471275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.471429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.471457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.471636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.471669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.471840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.471877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.472070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.472097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.472270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.472296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.472508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.472535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.472688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.472716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 
00:34:39.101 [2024-07-23 03:34:05.472889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.472917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.473124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.473150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.473298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.101 [2024-07-23 03:34:05.473325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.101 qpair failed and we were unable to recover it. 00:34:39.101 [2024-07-23 03:34:05.473492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.473518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.473693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.473719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.473892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.473919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.474088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.474115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.474268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.474295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.474441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.474468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.474606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.474638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 
00:34:39.102 [2024-07-23 03:34:05.474814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.474840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.474990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.475017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.475185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.475220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.476039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.476079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.476291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.476319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.476459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.476498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.476678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.476706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.476839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.476866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.477042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.477069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.477224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.477251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 
00:34:39.102 [2024-07-23 03:34:05.477399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.477434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.477609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.477643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.477816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.477842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.477983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.478009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.478172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.478199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.478394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.478420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.478604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.478636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.478842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.478880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.479056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.479082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 00:34:39.102 [2024-07-23 03:34:05.479236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.102 [2024-07-23 03:34:05.479263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.102 qpair failed and we were unable to recover it. 
00:34:39.103 [2024-07-23 03:34:05.479439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.479465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.479639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.479666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.479817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.479844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.480016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.480046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.480241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.480267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.480411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.480438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.480617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.480644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.480790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.480816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.481032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.481059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.481208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.481234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 
00:34:39.103 [2024-07-23 03:34:05.481374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.481401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.481570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.481597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.481790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.481817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.481979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.482006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.482175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.482202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.482372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.482398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.482568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.482594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.482783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.482810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.482982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.483008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.483186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.483212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 
00:34:39.103 [2024-07-23 03:34:05.483389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.483415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.483606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.483638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.483795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.483821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.483976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.484003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.484168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.484201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.484387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.103 [2024-07-23 03:34:05.484414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.103 qpair failed and we were unable to recover it. 00:34:39.103 [2024-07-23 03:34:05.484631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.484670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.484848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.484885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.485082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.485109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.485304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.485330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 
00:34:39.104 [2024-07-23 03:34:05.485514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.485546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.485696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.485723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.485891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.485918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.486100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.486126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.486275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.486310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.486455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.486492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.486697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.486724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.486868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.486894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.487081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.487107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.487248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.487274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 
00:34:39.104 [2024-07-23 03:34:05.487449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.487477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.487617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.487644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.487824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.487850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.487990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.488017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.488216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.488243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.488412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.488451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.488622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.488649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.488822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.488849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.489029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.489055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.489225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.489251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 
00:34:39.104 [2024-07-23 03:34:05.489397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.489424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.489617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.489644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.104 [2024-07-23 03:34:05.489851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.104 [2024-07-23 03:34:05.489887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.104 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.490055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.490082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.490229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.490256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.490430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.490468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.490630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.490657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.490826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.490854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.491045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.491072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.491240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.491267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 
00:34:39.105 [2024-07-23 03:34:05.491465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.491492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.491688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.491715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.491859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.491889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.492064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.492092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.492265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.492291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.492486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.492513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.492647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.492682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.492854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.492891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.493060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.493087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.493269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.493295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 
00:34:39.105 [2024-07-23 03:34:05.493484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.493511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.493668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.493696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.493844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.493882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.494060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.494221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.494419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.494624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.494806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.494976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.495003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.105 qpair failed and we were unable to recover it. 00:34:39.105 [2024-07-23 03:34:05.495172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.105 [2024-07-23 03:34:05.495209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 
00:34:39.106 [2024-07-23 03:34:05.495390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.495417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.495591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.495633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.495792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.495819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.495991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.496018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.496216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.496243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.496411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.496448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.496617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.496645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.496815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.496842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.497039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.497065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.497236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.497263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 
00:34:39.106 [2024-07-23 03:34:05.497432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.497460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.497633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.497669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.497866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.497898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.498094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.498119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.498264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.498298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.498469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.498495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.498697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.498724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.498873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.498901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.499072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.499102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.499243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.499281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 
00:34:39.106 [2024-07-23 03:34:05.499433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.499460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.499611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.499642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.500611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.500661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.500841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.500881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.501027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.501054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.106 qpair failed and we were unable to recover it. 00:34:39.106 [2024-07-23 03:34:05.501223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.106 [2024-07-23 03:34:05.501250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.501421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.501447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.501631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.501669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.501808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.501834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.502013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.502043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 
00:34:39.107 [2024-07-23 03:34:05.502247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.502273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.502469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.502495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.502665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.502691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.502870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.502898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.503047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.503074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.503244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.503271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.503479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.503505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.503652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.503679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.503827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.503853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.504053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.504081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 
00:34:39.107 [2024-07-23 03:34:05.504284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.504313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.504459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.504483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.504657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.504683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.504854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.504880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.107 [2024-07-23 03:34:05.505077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.107 [2024-07-23 03:34:05.505104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.107 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.505304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.505335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.505489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.505515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.505656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.505683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.505853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.505879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.506063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.506090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 
00:34:39.108 [2024-07-23 03:34:05.506266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.506291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.506463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.506487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.506640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.506666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.506811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.506837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.506981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.507154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.507327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.507502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.507699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.507878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.507903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 
00:34:39.108 [2024-07-23 03:34:05.508044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.508069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.508238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.508265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.508404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.508430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.508589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.508620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.508771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.508797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.508982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.509008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.509199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.509224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.509397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.509423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.509593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.509626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 00:34:39.108 [2024-07-23 03:34:05.509788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.509814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.108 qpair failed and we were unable to recover it. 
00:34:39.108 [2024-07-23 03:34:05.509997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.108 [2024-07-23 03:34:05.510023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.510166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.510193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.510390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.510419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.510597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.510630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.510791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.510817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.510969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.510994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.511190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.511215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.511405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.511430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.511575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.511602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.511772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.511798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 
00:34:39.109 [2024-07-23 03:34:05.511946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.511971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.512143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.512167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.512335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.512361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.512532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.512568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.512729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.512755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.512930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.512956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.513162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.513188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.513365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.513390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.513558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.513583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.513743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.513769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 
00:34:39.109 [2024-07-23 03:34:05.513901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.513928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.514074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.514100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.514248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.514275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.514444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.514469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.514636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.514663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.514801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.109 [2024-07-23 03:34:05.514826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.109 qpair failed and we were unable to recover it. 00:34:39.109 [2024-07-23 03:34:05.515012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.110 [2024-07-23 03:34:05.515037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.110 qpair failed and we were unable to recover it. 00:34:39.110 [2024-07-23 03:34:05.515212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.110 [2024-07-23 03:34:05.515238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.110 qpair failed and we were unable to recover it. 00:34:39.110 [2024-07-23 03:34:05.515372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.110 [2024-07-23 03:34:05.515398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.110 qpair failed and we were unable to recover it. 00:34:39.110 [2024-07-23 03:34:05.515589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.110 [2024-07-23 03:34:05.515618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420 00:34:39.110 qpair failed and we were unable to recover it. 
00:34:39.110 [2024-07-23 03:34:05.515809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.110 [2024-07-23 03:34:05.515834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179840 with addr=10.0.0.2, port=4420
00:34:39.110 qpair failed and we were unable to recover it.
00:34:39.110-00:34:39.115 The same pair of errors repeats for every connection attempt from [2024-07-23 03:34:05.515809] through [2024-07-23 03:34:05.549583], all against tqpair=0x2179840 with addr=10.0.0.2, port=4420 and errno = 111, and each attempt ends with "qpair failed and we were unable to recover it."
00:34:39.115 [2024-07-23 03:34:05.549751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.115 [2024-07-23 03:34:05.549791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.115 qpair failed and we were unable to recover it.
00:34:39.115-00:34:39.116 From [2024-07-23 03:34:05.549751] through [2024-07-23 03:34:05.556238] the identical failure pattern continues, now alternating between tqpair=0x7f5018000b90 and tqpair=0x2179840, always with addr=10.0.0.2, port=4420 and errno = 111; every qpair failed and could not be recovered.
00:34:39.116 [2024-07-23 03:34:05.556409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.556435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.556576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.556602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.556782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.556808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.556983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.557009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.557155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.557197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.557402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.557428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.557598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.557635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.557785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.557811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.557987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.558027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.558239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.558265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 
00:34:39.116 [2024-07-23 03:34:05.558437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.558475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.558673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.558700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.558896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.558921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.559085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.559111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.559299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.559324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.559514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.559539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.559717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.559743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.559916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.559942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.560109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.560135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.560272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.560297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 
00:34:39.116 [2024-07-23 03:34:05.560499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.560526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.560696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.560722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.560884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.560910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.561083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.561109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.561271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.561297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.561469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.561495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.561647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.116 [2024-07-23 03:34:05.561675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.116 qpair failed and we were unable to recover it. 00:34:39.116 [2024-07-23 03:34:05.561843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.561869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.562009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.562037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.562213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.562240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 
00:34:39.117 [2024-07-23 03:34:05.562388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.562414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.562585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.562611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.562917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.562944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.563141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.563167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.563314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.563339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.563505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.563531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.563697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.563723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.563865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.563891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.564076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.564104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.564272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.564298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 
00:34:39.117 [2024-07-23 03:34:05.564473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.564499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.564695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.564722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.564918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.564944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.565110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.565137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.565300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.565326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.565465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.565490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.565663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.565693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.565837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.565863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.566004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.566170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 
00:34:39.117 [2024-07-23 03:34:05.566369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.566544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.566722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.566917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.566944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.567082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.567108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.567313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.567339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.567482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.567509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.567662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.567689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.567882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.567908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.568077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.568103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 
00:34:39.117 [2024-07-23 03:34:05.568255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.568282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.568427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.117 [2024-07-23 03:34:05.568453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.117 qpair failed and we were unable to recover it. 00:34:39.117 [2024-07-23 03:34:05.568630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.568657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.568797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.568823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.568963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.568989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.569143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.569169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.569344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.569369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.569541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.569568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.569741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.569768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.569935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.569961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 
00:34:39.118 [2024-07-23 03:34:05.570129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.570154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.570325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.570351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.570520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.570545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.570741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.570767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.570938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.570964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.571138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.571164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.571296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.571328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.571475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.571501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.571651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.571677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.571855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.571880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 
00:34:39.118 [2024-07-23 03:34:05.572056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.572082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.572250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.572276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.572419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.572445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.572625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.572652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.572822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.572847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.572996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.573193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.573393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.573565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.573764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 
00:34:39.118 [2024-07-23 03:34:05.573930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.573955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.574104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.574129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.574278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.574304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.574486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.574511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.574695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.574721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.574885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.574911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.575080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.575107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.575242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.575268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.575409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.575436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.575593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.575626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 
00:34:39.118 [2024-07-23 03:34:05.575834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.575877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.576054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.118 [2024-07-23 03:34:05.576083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.118 qpair failed and we were unable to recover it. 00:34:39.118 [2024-07-23 03:34:05.576264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.576290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.576460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.576487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.576664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.576691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.576840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.576867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.577062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.577087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.577257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.577283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.577416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.577441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.577628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.577667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 
00:34:39.119 [2024-07-23 03:34:05.577813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.577840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.578005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.578030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.578226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.578251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.578403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.578430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.578627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.578654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.578821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.578846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.579052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.579078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.579254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.579281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.579480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.579505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.579696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.579723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 
00:34:39.119 [2024-07-23 03:34:05.579872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.579898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.580041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.580067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.580235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.580261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.580450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.580477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.580642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.580669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.580839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.580866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.581038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.581069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.581210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.581237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.581421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.581446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 00:34:39.119 [2024-07-23 03:34:05.581624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.119 [2024-07-23 03:34:05.581651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.119 qpair failed and we were unable to recover it. 
00:34:39.119 [2024-07-23 03:34:05.581799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.119 [2024-07-23 03:34:05.581825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420
00:34:39.119 qpair failed and we were unable to recover it.
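The two *ERROR* records above come from SPDK's socket layer (posix.c, posix_sock_create) reporting that connect() returned errno 111, after which the NVMe/TCP transport (nvme_tcp.c, nvme_tcp_qpair_connect_sock) fails the queue pair it was trying to establish toward 10.0.0.2:4420. On Linux, errno 111 is ECONNREFUSED: the peer host answered but nothing was listening on the port, which is what the initiator sees while the target is down or not yet listening. The following is a minimal, self-contained sketch using plain POSIX sockets, not SPDK code; the address and port are copied from the log purely for illustration.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* conventional NVMe over Fabrics TCP port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);   /* address taken from the log, illustrative only */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* Reachable host, no listener on the port: errno is 111 (ECONNREFUSED) on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a reachable host with nothing listening on the port, this prints the same errno the log shows; if the address were unrouteable the failure would instead be ETIMEDOUT or EHOSTUNREACH, which is one way to tell a stopped target apart from a broken network path.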
00:34:39.124 [2024-07-23 03:34:05.618464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.124 [2024-07-23 03:34:05.618504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.124 qpair failed and we were unable to recover it.
00:34:39.125 [2024-07-23 03:34:05.623690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.623716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.623863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.623888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.624086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.624259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.624429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.624638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.624831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.624999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.625190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.625392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 
00:34:39.125 [2024-07-23 03:34:05.625589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.625796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.125 [2024-07-23 03:34:05.625958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.125 [2024-07-23 03:34:05.625982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.125 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.626163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.626189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.626325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.626351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.626500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.626524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.626720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.626746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.626902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.626927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.627070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.627096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.627278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.627304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 
00:34:39.403 [2024-07-23 03:34:05.627452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.627477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.627676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.627703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.403 qpair failed and we were unable to recover it. 00:34:39.403 [2024-07-23 03:34:05.627854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.403 [2024-07-23 03:34:05.627880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.628052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.628078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.628245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.628271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.628445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.628471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.628643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.628673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.628813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.628838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.629013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.629038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.629204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.629230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 
00:34:39.404 [2024-07-23 03:34:05.629398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.629423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.629590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.629620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.629799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.629823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.629991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.630161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.630324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.630518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.630749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.630922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.630949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.631145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.631171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 
00:34:39.404 [2024-07-23 03:34:05.631356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.631381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.631548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.631574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.631771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.631796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.631970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.631995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.632162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.632187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.632358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.632385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.632531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.632558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.632750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.632776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.632923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.632948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.633147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.633174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 
00:34:39.404 [2024-07-23 03:34:05.633347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.633374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.633534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.633558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.633710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.633736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.633879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.633904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.634076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.634101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.634299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.634325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.634487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.634512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.634684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.634711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.634911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.634937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 00:34:39.404 [2024-07-23 03:34:05.635106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.404 [2024-07-23 03:34:05.635132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.404 qpair failed and we were unable to recover it. 
00:34:39.405 [2024-07-23 03:34:05.635284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.635309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.635453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.635479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.635673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.635699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.635897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.635923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.636099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.636125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.636266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.636290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.636442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.636471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.636644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.636671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.636842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.636868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.637035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.637060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 
00:34:39.405 [2024-07-23 03:34:05.637238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.637265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.637463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.637488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.637631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.637656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.637834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.637859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.638055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.638080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.638246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.638271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.638421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.638449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.638623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.638649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.638819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.638844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.639015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.639042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 
00:34:39.405 [2024-07-23 03:34:05.639187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.639213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.639383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.639409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.639584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.639610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.639778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.639804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.639998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.640173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.640396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.640564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.640771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.640970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.640996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 
00:34:39.405 [2024-07-23 03:34:05.641154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.641180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.641328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.641354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.641546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.641570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.641740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.641781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.641944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.641973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.642146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.642174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.642318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.642345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.405 qpair failed and we were unable to recover it. 00:34:39.405 [2024-07-23 03:34:05.642519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.405 [2024-07-23 03:34:05.642546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.642712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.642739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.642933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.642959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 
00:34:39.406 [2024-07-23 03:34:05.643156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.643183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.643378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.643404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.643545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.643571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.643774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.643801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.643941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.643967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.644147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.644173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.644351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.644383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.644564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.644590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.644763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.644789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.644939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.644965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 
00:34:39.406 [2024-07-23 03:34:05.645108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.645133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.645316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.645341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.645488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.645515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.645712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.645739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.645912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.645939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.646093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.646120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.646286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.646312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.646478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.646504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.646679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.646717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.646912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.646938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 
00:34:39.406 [2024-07-23 03:34:05.647113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.647139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.647312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.647339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.647508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.647536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.647708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.647734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.647879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.647905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.648050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.648078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.648271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.648298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.648470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.648496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.648673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.648699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.648852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.648877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 
00:34:39.406 [2024-07-23 03:34:05.649021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.649048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.649199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.649225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.649398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.649424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.649628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.649655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.649797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.406 [2024-07-23 03:34:05.649823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.406 qpair failed and we were unable to recover it. 00:34:39.406 [2024-07-23 03:34:05.649999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.650185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 596822 Killed "${NVMF_APP[@]}" "$@" 00:34:39.407 [2024-07-23 03:34:05.650358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.650533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 
00:34:39.407 [2024-07-23 03:34:05.650745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 [2024-07-23 03:34:05.650918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.650944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-07-23 03:34:05.651137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.651162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt [2024-07-23 03:34:05.651358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.651383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable [2024-07-23 03:34:05.651584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.651610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.407 [2024-07-23 03:34:05.651799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.651824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.651972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.651999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.652170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.652196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it.
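For context on the repeated failures above: errno 111 on Linux is ECONNREFUSED. The shell trace interleaved with the errors shows target_disconnect.sh killing "${NVMF_APP[@]}" and then restarting the target (disconnect_init 10.0.0.2 / nvmfappstart -m 0xF0), so every connection attempt from the host side is refused until a listener is back on 10.0.0.2:4420. The sketch below is not SPDK code; it is a minimal POSIX example that reproduces the same errno when nothing is listening on the target port. The address and port mirror the log; everything else is illustrative only.

    /* Minimal sketch (not SPDK code): connect() to a TCP port with no listener
     * fails with errno 111 (ECONNREFUSED) on Linux, matching the
     * posix_sock_create errors above while the target app is down.
     * 10.0.0.2:4420 mirrors the log; the rest is illustrative only. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on a reachable target, the SYN is answered with a
         * RST and connect() fails immediately with ECONNREFUSED (111). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Once the restarted target is listening on port 4420 again, the same connect() succeeds and the retry loop seen in the log would be expected to stop.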
00:34:39.407 [2024-07-23 03:34:05.652364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.652390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.652562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.652588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.652743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.652770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.652908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.652933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.653109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.653135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.653271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.653296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.653460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.653487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.653658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.653684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.653822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.653850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.654020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 
00:34:39.407 [2024-07-23 03:34:05.654215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.654393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.654598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.654786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.654963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.654990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.655162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.655188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.655363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.655388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.655529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.655556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.655762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.655789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.655963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.655990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 
00:34:39.407 [2024-07-23 03:34:05.656139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.656165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.656361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.656386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.656584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.407 [2024-07-23 03:34:05.656610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.407 qpair failed and we were unable to recover it. 00:34:39.407 [2024-07-23 03:34:05.656768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.656794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=597372 00:34:39.408 [2024-07-23 03:34:05.656946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.656974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 597372 00:34:39.408 [2024-07-23 03:34:05.657151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.657177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.657319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 597372 ']' 00:34:39.408 [2024-07-23 03:34:05.657346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.408 [2024-07-23 03:34:05.657520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.657547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 
00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:39.408 [2024-07-23 03:34:05.657723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.657750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.408 [2024-07-23 03:34:05.657937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:39.408 [2024-07-23 03:34:05.657964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 03:34:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.408 [2024-07-23 03:34:05.658108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.658136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.658280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.658307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.658739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.658769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.658977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.659183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.659372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 
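[Editor's note] The waitforlisten step above blocks until the newly started nvmf_tgt (pid 597372, launched with ip netns exec cvl_0_0_ns_spdk ... -m 0xF0) begins accepting RPC connections on /var/tmp/spdk.sock. As an illustration only, and not the actual autotest_common.sh waitforlisten implementation, a C sketch of that idea is to poll the UNIX domain socket until a connect() succeeds or a timeout expires; the helper name wait_for_unix_listener and the 30-attempt budget are assumptions.

    /* Hypothetical sketch of a "wait for listen" step: retry connecting to a
     * UNIX domain socket path until it accepts, or give up after N attempts.
     * Illustrative only; not the SPDK test-suite implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_unix_listener(const char *path, int attempts)
    {
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;          /* the RPC listener is up */
            }
            close(fd);
            sleep(1);              /* retry once per second */
        }
        return -1;                 /* timed out waiting for the listener */
    }

    int main(void)
    {
        if (wait_for_unix_listener("/var/tmp/spdk.sock", 30) == 0) {
            printf("RPC socket is listening\n");
        } else {
            printf("timed out waiting for RPC socket\n");
        }
        return 0;
    }

Once the RPC socket answers, the test proceeds and the host-side connect() retries against 10.0.0.2:4420 can finally succeed.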
00:34:39.408 [2024-07-23 03:34:05.659572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.659764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.659964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.659991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.660163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.660191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.660365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.660392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.660561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.660587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.660769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.660795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.660948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.660975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.661150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.661178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.664628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.664676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 
00:34:39.408 [2024-07-23 03:34:05.664892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.664923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.665082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.665113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.665301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.408 [2024-07-23 03:34:05.665330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.408 qpair failed and we were unable to recover it. 00:34:39.408 [2024-07-23 03:34:05.665486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.665514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.665704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.665733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.665913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.665942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.666123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.666154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.666362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.666393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.666574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.666603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.666792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.666821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 
00:34:39.409 [2024-07-23 03:34:05.667028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.667059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.667268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.667299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.667455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.667486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.667695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.667732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.667888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.667914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.668090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.668117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.668295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.668323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.668526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.668557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.668752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.668783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.668958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.668988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 
00:34:39.409 [2024-07-23 03:34:05.669167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.669197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.669383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.669412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.669585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.669621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.669773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.669802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.669955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.669983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.670157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.670186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.670363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.670392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.670574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.670603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.670824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.670854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.671033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.671065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 
00:34:39.409 [2024-07-23 03:34:05.671246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.671275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.671432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.671461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.671608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.671646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.671839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.671869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.672077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.672105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.672556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.672585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.672766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.672795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.672950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.672977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.673121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.673148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.409 [2024-07-23 03:34:05.673311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.673338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 
00:34:39.409 [2024-07-23 03:34:05.673480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.409 [2024-07-23 03:34:05.673506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.409 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.673671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.673698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.673867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.673894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.674068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.674094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.674270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.674297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.674450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.674478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.674624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.674652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.674798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.674825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.675032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.675058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.675201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.675228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 
00:34:39.410 [2024-07-23 03:34:05.675434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.675461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.675604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.675636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.675810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.675837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.675988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.676020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.676228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.676254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.676400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.676427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.676598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.676629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.676812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.676839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.676994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.677023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.677170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.677197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 
00:34:39.410 [2024-07-23 03:34:05.677367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.677392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.677597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.677632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.677777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.677805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.677981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.678008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.678167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.678193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.678367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.678393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.678555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.678581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.678807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.678835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.678982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.679147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 
00:34:39.410 [2024-07-23 03:34:05.679354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.679527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.679704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.679902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.679929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.680100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.680126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.680299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.680325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.680495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.680521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.680666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.410 [2024-07-23 03:34:05.680694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.410 qpair failed and we were unable to recover it. 00:34:39.410 [2024-07-23 03:34:05.680866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.680892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.681086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.681112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 
00:34:39.411 [2024-07-23 03:34:05.681264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.681290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.681435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.681461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.681642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.681669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.681816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.681842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.682035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.682233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.682433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.682605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.682810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.682983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 
00:34:39.411 [2024-07-23 03:34:05.683187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.683374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.683568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.683777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.683943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.683970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.684143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.684170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.684317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.684344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.684492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.684520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.684720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.684748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.684897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.684924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 
00:34:39.411 [2024-07-23 03:34:05.685064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.685092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.685261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.685288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.685484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.685511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.685660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.685687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.685855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.685881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.686077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.686104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.686265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.686291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.686468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.686495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.686666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.686694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.686843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.686872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 
00:34:39.411 [2024-07-23 03:34:05.687014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.687206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.687398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.687598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.687798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.687970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.411 [2024-07-23 03:34:05.687996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.411 qpair failed and we were unable to recover it. 00:34:39.411 [2024-07-23 03:34:05.688143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.688169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.688320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.688347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.688517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.688544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.688698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.688727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 
00:34:39.412 [2024-07-23 03:34:05.688881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.688909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.689077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.689104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.689280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.689308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.689476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.689504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.689707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.689734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.689883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.689921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.690091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.690119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.690315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.690342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.690507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.690542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.690745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.690772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 
00:34:39.412 [2024-07-23 03:34:05.690919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.690946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.691084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.691110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.691283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.691310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.691473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.691504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.691665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.691692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.691845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.691873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.692062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.692089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.692270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.692296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.692473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.692500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.692689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.692717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 
00:34:39.412 [2024-07-23 03:34:05.692862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.692889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.693069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.693095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.693271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.693297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.693474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.693500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.693647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.693674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.693821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.693849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.694030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.694057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.694230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.694257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.694403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.694429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.694627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.694655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 
00:34:39.412 [2024-07-23 03:34:05.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.412 [2024-07-23 03:34:05.694859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.412 qpair failed and we were unable to recover it. 00:34:39.412 [2024-07-23 03:34:05.695031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.695064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.695234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.695261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.695407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.695434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.695645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.695673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.695835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.695861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.696070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.696096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.696265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.696292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.696435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.696461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.696625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.696653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 
00:34:39.413 [2024-07-23 03:34:05.696865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.696892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.697039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.697066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.697248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.697276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.697485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.697512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.697655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.697682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.697856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.697883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.698043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.698070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.698220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.698248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.698405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.698432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.698604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.698643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 
00:34:39.413 [2024-07-23 03:34:05.698819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.698847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.699054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.699263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.699466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.699639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.699836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.699985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.700012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.700190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.700217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.700420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.700447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.700640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.700667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 
00:34:39.413 [2024-07-23 03:34:05.700843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.700870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.701046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.701072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.701250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.701276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.701450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.701476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.701621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.413 [2024-07-23 03:34:05.701648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.413 qpair failed and we were unable to recover it. 00:34:39.413 [2024-07-23 03:34:05.701822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.701848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.702000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.702028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.702190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.702230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.702452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.702492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.702683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.702712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 
00:34:39.414 [2024-07-23 03:34:05.702859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.702887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.703069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.703248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.703418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.703588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.703796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.703974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.704001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.704168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.704194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.704390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.704416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.704587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.704631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 
00:34:39.414 [2024-07-23 03:34:05.704811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.704841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705735] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:39.414 [2024-07-23 03:34:05.705763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.705807] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.414 [2024-07-23 03:34:05.705964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.705990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.706160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.706185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.706324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.706349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 
00:34:39.414 [2024-07-23 03:34:05.706517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.706544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.706696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.706724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.706893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.706931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.707103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.707134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.707306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.707334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.707484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.707513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.707695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.707723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.707919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.707946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.708110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.708137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.708310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.708336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 
00:34:39.414 [2024-07-23 03:34:05.708521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.708548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.708732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.708760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.414 [2024-07-23 03:34:05.708931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.414 [2024-07-23 03:34:05.708958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.414 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.709101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.709141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.709344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.709371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.709541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.709568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.709726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.709754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.709956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.709983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.710135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.710162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.710335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.710362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 
00:34:39.415 [2024-07-23 03:34:05.710557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.710584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.710746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.710773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.710954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.710981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.711147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.711174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.711367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.711394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.711563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.711590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.711770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.711797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.711945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.711972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.712180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.712206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.712358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.712385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 
00:34:39.415 [2024-07-23 03:34:05.712537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.712564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.712709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.712736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.712884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.712925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.713123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.713150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.713307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.713334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.713477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.713506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.713688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.713719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.713931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.713957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.714105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.714131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.714303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.714329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 
00:34:39.415 [2024-07-23 03:34:05.714478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.714505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.714708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.714736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.714882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.714919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.715096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.715125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.715323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.715349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.715554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.715581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.715738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.715767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.715916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.715942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.716094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.716120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 00:34:39.415 [2024-07-23 03:34:05.716294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.716320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.415 qpair failed and we were unable to recover it. 
00:34:39.415 [2024-07-23 03:34:05.716499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.415 [2024-07-23 03:34:05.716526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.716682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.716710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.716854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.716880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.717066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.717094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.717281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.717307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.717482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.717509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.717842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.717871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.718079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.718107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.718258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.718286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.718457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.718483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 
00:34:39.416 [2024-07-23 03:34:05.718658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.718692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.718863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.718890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.719062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.719088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.719260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.719287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.719466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.719491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.719667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.719694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.719902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.719928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.720231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.720262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.720457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.720484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.720633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.720660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 
00:34:39.416 [2024-07-23 03:34:05.720811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.720837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.721030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.721058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.721256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.721282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.721430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.721457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.721605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.721641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.721808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.721835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.722023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.722049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.722202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.722229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.722398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.722425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.722597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.722633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 
00:34:39.416 [2024-07-23 03:34:05.722841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.722867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.723051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.723077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.723251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.723278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.723482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.723513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.723719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.723747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.723921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.723948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.724144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.724169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.724343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.416 [2024-07-23 03:34:05.724368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.416 qpair failed and we were unable to recover it. 00:34:39.416 [2024-07-23 03:34:05.724565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.724591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.724781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.724808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 
00:34:39.417 [2024-07-23 03:34:05.724980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.725007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.725179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.725204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.725377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.725402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.725573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.725600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.725780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.725806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.725982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.726008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.726183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.726208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.726360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.726386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.726536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.726562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.726762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.726803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 
00:34:39.417 [2024-07-23 03:34:05.726981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.727011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.727195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.727223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.727366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.727392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.727562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.727589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.727789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.727816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.727970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.728206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.728384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.728556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.728764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 
00:34:39.417 [2024-07-23 03:34:05.728946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.728974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.729154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.729181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.729351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.729378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.729529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.729556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.729729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.729756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.729937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.729963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.730140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.730166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.730311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.730337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.730504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.730530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 00:34:39.417 [2024-07-23 03:34:05.730679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.417 [2024-07-23 03:34:05.730706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.417 qpair failed and we were unable to recover it. 
00:34:39.417 [2024-07-23 03:34:05.730847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.730873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.731054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.731085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.731254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.731281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.731472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.731503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.731657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.731685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.731836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.731862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.732055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.732081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.732253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.732281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.732449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.732476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.732623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.732653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 
00:34:39.418 [2024-07-23 03:34:05.732856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.732883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.733044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.733070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.733254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.733280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.733447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.733472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.733666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.733693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.733834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.733859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.734003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.734028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.734233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.734259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.734433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.734461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.734602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.734638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 
00:34:39.418 [2024-07-23 03:34:05.734807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.734834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.735002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.735029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.735211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.735238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.735411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.735449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.735649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.735676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.735869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.735895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.736040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.736067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.736243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.736280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.736450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.736477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.736647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.736675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 
00:34:39.418 [2024-07-23 03:34:05.736852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.736880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.737053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.737081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.737290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.737317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.737490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.737516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.737690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.737716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.737874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.737899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.738047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.738074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.738217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.418 [2024-07-23 03:34:05.738244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.418 qpair failed and we were unable to recover it. 00:34:39.418 [2024-07-23 03:34:05.738415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.738440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.738611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.738642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 
00:34:39.419 [2024-07-23 03:34:05.738812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.738838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.739032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.739237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.739442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.739642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.739835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.739981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.740182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.740355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.740579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 
00:34:39.419 [2024-07-23 03:34:05.740798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.740970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.740995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.741170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.741208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.741383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.741410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.741570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.741628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.741811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.741841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.741992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.742019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.742207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.742235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.742444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.742470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.742652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.742680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 
00:34:39.419 [2024-07-23 03:34:05.742828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.742855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.743029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.743056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.743230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.743256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.743426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.743453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.743647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.743675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.419 [2024-07-23 03:34:05.743822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.743849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.744033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.744060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.744214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.744241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.744429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.744455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.744634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.744662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 
00:34:39.419 [2024-07-23 03:34:05.744867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.744893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.745038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.745064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.745212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.745239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.745411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.745437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.745589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.745623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.419 [2024-07-23 03:34:05.745797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.419 [2024-07-23 03:34:05.745824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.419 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.745989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.746015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.746162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.746201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.746397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.746425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.746634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.746662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 
00:34:39.420 [2024-07-23 03:34:05.746832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.746859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.747026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.747052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.747249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.747276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.747435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.747467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.747667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.747694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.747863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.747891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.748049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.748076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.748229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.748256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.748415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.748442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.748592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.748626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 
00:34:39.420 [2024-07-23 03:34:05.748811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.748838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.749034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.749061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.749219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.749248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.749411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.749439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.749623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.749651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.749835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.749862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.750036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.750063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.750207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.750234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.750390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.750417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.750596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.750634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 
00:34:39.420 [2024-07-23 03:34:05.750829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.750856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.751000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.751027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.751231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.751259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.751464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.751491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.751678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.751706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.751877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.751904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.752074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.752101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.752289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.752317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.752517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.752543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 00:34:39.420 [2024-07-23 03:34:05.752687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.420 [2024-07-23 03:34:05.752714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.420 qpair failed and we were unable to recover it. 
00:34:39.420 [2024-07-23 03:34:05.752895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.420 [2024-07-23 03:34:05.752923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420
00:34:39.420 qpair failed and we were unable to recover it.
00:34:39.420 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 03:34:05.752 through 03:34:05.795, alternating between tqpair=0x7f5008000b90 and tqpair=0x7f5018000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:39.426 [2024-07-23 03:34:05.795227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.426 [2024-07-23 03:34:05.795254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.426 qpair failed and we were unable to recover it.
00:34:39.426 [2024-07-23 03:34:05.795393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.795420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.795585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.795620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.795793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.795818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.795965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.795995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.796147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.796173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.796351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.796378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.796547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.426 [2024-07-23 03:34:05.796573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.426 qpair failed and we were unable to recover it. 00:34:39.426 [2024-07-23 03:34:05.796752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.796779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.796977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.797004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.797197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.797224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 
00:34:39.427 [2024-07-23 03:34:05.797373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.797399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.797558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.797584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.797786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.797813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.798018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.798045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.798217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.798243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.798434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.798460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.798657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.798683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.798833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.798858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.799035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.799061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.799212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.799238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 
00:34:39.427 [2024-07-23 03:34:05.799415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.799441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.799623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.799649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.799793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.799820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.799962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.799988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.800162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.800189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.800388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.800413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.800564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.800590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.800800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.800826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
00:34:39.427 [2024-07-23 03:34:05.800840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:39.427 [2024-07-23 03:34:05.800999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:39.427 [2024-07-23 03:34:05.801027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420
00:34:39.427 qpair failed and we were unable to recover it.
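Every failed attempt above reports errno = 111, which on Linux is ECONNREFUSED: the TCP connect() to 10.0.0.2 port 4420 (the conventional NVMe/TCP port) is rejected because nothing is accepting on that address at that moment, so posix_sock_create() fails and nvme_tcp_qpair_connect_sock() gives up on the qpair. The sketch below is a minimal standalone probe of the same address and port, taking only the values shown in the messages above; it is not SPDK code, just a way to reproduce and read the errno.

/*
 * Standalone illustration only -- not SPDK code. It probes the
 * address/port seen in the log (10.0.0.2:4420) with a plain blocking
 * connect() and reports errno, so "errno = 111" can be read as
 * ECONNREFUSED (no listener at that port).
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* On Linux, ECONNREFUSED is errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connect() succeeded: a listener is accepting on port 4420\n");
    }
    close(fd);
    return 0;
}

Run against a host with no listener on port 4420 it prints the same errno 111 as the log; once a listener is up it reports success.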
00:34:39.427 [2024-07-23 03:34:05.801202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.801228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.801382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.801408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.801611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.801646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.801805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.801830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.802055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.802278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.802473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.802662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.802828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.802992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.803018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 
00:34:39.427 [2024-07-23 03:34:05.803178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.803203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.803368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.803395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.803551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.803578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.803801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.803828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.803999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.427 [2024-07-23 03:34:05.804040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.427 qpair failed and we were unable to recover it. 00:34:39.427 [2024-07-23 03:34:05.804244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.804271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.804472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.804499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.804685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.804713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.804887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.804914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.805093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.805121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 
00:34:39.428 [2024-07-23 03:34:05.805300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.805326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.805508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.805536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.805694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.805722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.805865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.805893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.806100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.806127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.806326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.806352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.806523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.806551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.806703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.806736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.806917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.806944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.807092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.807119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 
00:34:39.428 [2024-07-23 03:34:05.807316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.807342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.807539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.807565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.807720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.807748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.807945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.807971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.808148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.808175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.808344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.808371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.808571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.808598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.808807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.808834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.809018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.809045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.809212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.809238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 
00:34:39.428 [2024-07-23 03:34:05.809382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.809409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.809627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.809654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.809835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.809862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.810041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.810067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.810238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.810264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.810435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.810461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.810611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.810646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.810820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.810847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.811000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.811027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.811168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.811196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 
00:34:39.428 [2024-07-23 03:34:05.811395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.811422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.811606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.811642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.428 [2024-07-23 03:34:05.811785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.428 [2024-07-23 03:34:05.811812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.428 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.811995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.812022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.812174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.812201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.812400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.812427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.812619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.812650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.812853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.812879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.813067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.813093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.813364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.813390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 
00:34:39.429 [2024-07-23 03:34:05.813575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.813619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.813792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.813818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.813983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.814152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.814316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.814519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.814715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.814886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.814912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.815105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.815131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.815274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.815299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 
00:34:39.429 [2024-07-23 03:34:05.815475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.815501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.815655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.815680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.815850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.815875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.816060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.816085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.816250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.816275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.816446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.816473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.816687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.816714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.816867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.816904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.817098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.817124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.817290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.817315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 
00:34:39.429 [2024-07-23 03:34:05.817524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.817551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.817740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.817768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.817944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.817971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.818142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.818168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.818331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.818357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.818510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.818536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.818681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.818709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.818856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.818883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.819087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.819112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 00:34:39.429 [2024-07-23 03:34:05.819272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.429 [2024-07-23 03:34:05.819297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.429 qpair failed and we were unable to recover it. 
00:34:39.430 [2024-07-23 03:34:05.819473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.819500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.819674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.819702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.819866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.819892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.820059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.820085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.820221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.820252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.820427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.820466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.820642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.820669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.820849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.820876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.821075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.821101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.821285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.821311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 
00:34:39.430 [2024-07-23 03:34:05.821465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.821492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.821657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.821699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.821918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.821946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.822124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.822153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.822303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.822331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.822509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.822535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.822706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.822732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.822869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.822896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.823100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.823127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.823270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.823297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 
00:34:39.430 [2024-07-23 03:34:05.823469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.823496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.823703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.823731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.823904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.823931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.824110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.824138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.824336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.824363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.824502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.824529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.824706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.824734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.824909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.824935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.825115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.825143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.825293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.825321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 
00:34:39.430 [2024-07-23 03:34:05.825520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.825547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.825759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.825787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.825932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.825960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.826159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.826185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.826361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.826388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.826590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.826628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.826834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.430 [2024-07-23 03:34:05.826861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.430 qpair failed and we were unable to recover it. 00:34:39.430 [2024-07-23 03:34:05.827058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.827085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.827225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.827252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.827451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.827477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 
00:34:39.431 [2024-07-23 03:34:05.827629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.827657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.827826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.827853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.828020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.828046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.828222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.828249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.828421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.828452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.828654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.828681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.828823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.828849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.829021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.829047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.829184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.829211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.829380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.829407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 
00:34:39.431 [2024-07-23 03:34:05.829556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.829582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.829771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.829799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.830009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.830035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.830224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.830250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.830420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.830445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.830624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.830650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.830804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.830831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.831004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.831031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.831207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.831233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.831407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.831434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 
00:34:39.431 [2024-07-23 03:34:05.831622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.831650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.831826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.831852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.832961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.832986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.431 [2024-07-23 03:34:05.833155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.431 [2024-07-23 03:34:05.833182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.431 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.833337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.833364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 
00:34:39.432 [2024-07-23 03:34:05.833530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.833556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.833748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.833775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.833958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.833984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.834167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.834194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.834360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.834387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.834550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.834577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.834748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.834774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.834943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.834969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.835144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.835171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.835356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.835383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 
00:34:39.432 [2024-07-23 03:34:05.835579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.835623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.835823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.835849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.836025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.836051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.836223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.836251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.836428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.836459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 A controller has encountered a failure and is being reset. 00:34:39.432 [2024-07-23 03:34:05.836660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.836701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.836856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.836883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.837058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.837085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.837230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.837257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.837411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.837438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 
00:34:39.432 [2024-07-23 03:34:05.837634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.837661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.837838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.837865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.838950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.838982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.839160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.839187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.839356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.839382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 
00:34:39.432 [2024-07-23 03:34:05.839556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.839584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.839740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.839766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.839921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.839948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.840116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.840143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.840294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.840320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.432 [2024-07-23 03:34:05.840493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.432 [2024-07-23 03:34:05.840519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.432 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.840698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.840727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.840935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.840962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.841131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.841157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.841332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.841359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 
00:34:39.433 [2024-07-23 03:34:05.841502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.841529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.841709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.841736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.841881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.841907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.842073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.842099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.842252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.842278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.842423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.842450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.842625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.842653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.842846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.842872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.843052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.843079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.843287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.843314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 
00:34:39.433 [2024-07-23 03:34:05.843486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.843513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.843659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.843687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.843885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.843912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.844060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.844088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.844260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.844287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.844436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.844464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.844607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.844642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.844815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.844842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.845019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.845047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.845224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.845251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 
00:34:39.433 [2024-07-23 03:34:05.845394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.845422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.845603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.845638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.845809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.845836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.845982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.846009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.846155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.846184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.846354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.846381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.846557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.846584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.846789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.846820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.847014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.847176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 
00:34:39.433 [2024-07-23 03:34:05.847349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.847555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.847768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.433 qpair failed and we were unable to recover it. 00:34:39.433 [2024-07-23 03:34:05.847942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.433 [2024-07-23 03:34:05.847968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.848155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.848181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.848355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.848381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.848563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.848589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.848745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.848772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.848946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.848973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.849147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.849174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 
00:34:39.434 [2024-07-23 03:34:05.849344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.849371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.849549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.849576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.849738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.849767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.849971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.849998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.850140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.850168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.850347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.850374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.850570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.850598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.850782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.850810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.850990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.851018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.851155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.851181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 
00:34:39.434 [2024-07-23 03:34:05.851380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.851407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.851587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.851625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.851805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.851831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.851991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.852018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.852220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.852247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.852413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.852439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.852584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.852630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.852835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.852862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.853008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.853201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 
00:34:39.434 [2024-07-23 03:34:05.853392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.853586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.853780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.853963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.853990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.854162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.854189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.854361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.854387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.854560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.854587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.854794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.854826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.854999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.855026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.855202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.855229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 
00:34:39.434 [2024-07-23 03:34:05.855406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.434 [2024-07-23 03:34:05.855433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.434 qpair failed and we were unable to recover it. 00:34:39.434 [2024-07-23 03:34:05.855595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.855630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.855832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.855859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.856045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.856072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.856271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.856297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.856470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.856497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.856665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.856693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.856890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.856917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.857091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.857119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.857389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.857415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 
00:34:39.435 [2024-07-23 03:34:05.857565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.857592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.857805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.857833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.857994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.858020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.858192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.858219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.858392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.858418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.858619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.858646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.858809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.858836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.859015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.859042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.859243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.859269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.859445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.859472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 
00:34:39.435 [2024-07-23 03:34:05.859625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.859653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.859806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.859833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.859981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.860007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.860153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.860180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.860358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.860385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.860585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.860642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.860796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.860823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.861003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.861029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.861202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.861228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.861431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.861458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 
00:34:39.435 [2024-07-23 03:34:05.861603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.861639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.861836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.861863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.862017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.862044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.862231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.862258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.862405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.862432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.862638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.862666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.862816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.862843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.863012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.863044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.435 qpair failed and we were unable to recover it. 00:34:39.435 [2024-07-23 03:34:05.863217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.435 [2024-07-23 03:34:05.863243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.863449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.863476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 
00:34:39.436 [2024-07-23 03:34:05.863633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.863662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.863833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.863859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.864037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.864064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.864261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.864288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.864453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.864480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.864658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.864686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.864865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.864892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.865067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.865093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.865260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.865286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.865454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.865480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 
00:34:39.436 [2024-07-23 03:34:05.865653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.865682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.865838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.865865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.866015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.866041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.866222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.866248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.866418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.866446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5008000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.866681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.866725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.866880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.866918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.867050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.867076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.867279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.867304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.867478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.867505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 
00:34:39.436 [2024-07-23 03:34:05.867669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.867698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.867871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.867898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.868076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.868103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.868245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.868271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.868420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.868447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.868624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.868651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.868829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.868855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.869067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.869092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.869263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.869290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-07-23 03:34:05.869436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.869463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.436 qpair failed and we were unable to recover it. 
00:34:39.436 [2024-07-23 03:34:05.869662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.436 [2024-07-23 03:34:05.869688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.869828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.869854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.870063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.870088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.870246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.870272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.870443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.870470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.870639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.870665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.870837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.870862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.871039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.871072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.871244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.871272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.871445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.871471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 
00:34:39.437 [2024-07-23 03:34:05.871641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.871667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.871845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.871871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.872058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.872084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.872256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.872282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.872451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.872477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.872634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.872662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.872859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.872886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.873096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.873122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.873294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.873320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.873504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.873530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 
00:34:39.437 [2024-07-23 03:34:05.873694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.873720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.873922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.873947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.874122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.874150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.874341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.874368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.874540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.874567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.874756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.874784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.874954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.874980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.875153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.875178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.875342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.875370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.875540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.875568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 
00:34:39.437 [2024-07-23 03:34:05.875752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.875780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.875937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.875964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.876165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.876191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.876364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.876390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.876576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.876628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.876805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.876832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.877022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.877048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.437 [2024-07-23 03:34:05.877247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.437 [2024-07-23 03:34:05.877273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.437 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.877449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.877475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.877632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.877659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 
00:34:39.438 [2024-07-23 03:34:05.877824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.877850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.878033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.878059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.878230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.878257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.878407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.878435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.878608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.878641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.878811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.878837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.879007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.879033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.879212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.879239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.879397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.879424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.879596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.879628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 
00:34:39.438 [2024-07-23 03:34:05.879803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.879829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.880012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.880039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.880219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.880244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.880446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.880473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.880672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.880700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.880851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.880878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.881083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.881110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.881283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.881309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.881467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.881494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.881663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.881691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 
00:34:39.438 [2024-07-23 03:34:05.881862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.881889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.882070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.882096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.882273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.882301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.882494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.882521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.882694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.882721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.882873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.882910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.883085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.883112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.883255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.883281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.883456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.883481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.883678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.883704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 
00:34:39.438 [2024-07-23 03:34:05.883849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.883875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.884063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.884090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.884290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.884316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.884490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.884516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.884668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.884698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.438 qpair failed and we were unable to recover it. 00:34:39.438 [2024-07-23 03:34:05.884867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.438 [2024-07-23 03:34:05.884893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.885097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.885123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.885297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.885324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.885499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.885526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.885680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.885706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 
00:34:39.439 [2024-07-23 03:34:05.885905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.885931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.886098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.886125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.886323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.886350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.886523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.886549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.886720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.886747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.886925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.886952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.887098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.887125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.887297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.887324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.887535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.887561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.887737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.887764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 
00:34:39.439 [2024-07-23 03:34:05.887918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.887945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.888124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.888150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.888344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.888370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.888543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.888570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.888750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.888777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.888944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.888972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.889176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.889203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.889394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.889420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.889565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.889592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.889748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.889775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 
00:34:39.439 [2024-07-23 03:34:05.889923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.889949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.890153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.890180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.890334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.890360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.890567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.890594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.890797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.890823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.890972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.890999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.891142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.891167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.891336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.891363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.891525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.891552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.891751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.891778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 
00:34:39.439 [2024-07-23 03:34:05.891923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.891948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.892152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.892179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.892380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.439 [2024-07-23 03:34:05.892407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.439 qpair failed and we were unable to recover it. 00:34:39.439 [2024-07-23 03:34:05.892605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.892637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.892837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.892869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.893041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.893068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.893269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.893296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.893469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.893496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.893645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.893672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.893820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.893848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 
00:34:39.440 [2024-07-23 03:34:05.894022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.894049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.894220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.894246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.894396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.894425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.894628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.894655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.894824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.894851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.895009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.895035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.895237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.895263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.895410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.895436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.895621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.895647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.895825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.895851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 
00:34:39.440 [2024-07-23 03:34:05.896006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.896031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.896169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.896196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.896393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.896420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.896566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.896592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5018000b90 with addr=10.0.0.2, port=4420 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 qpair failed and we were unable to recover it. 00:34:39.440 [2024-07-23 03:34:05.896817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.440 [2024-07-23 03:34:05.896867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187390 with addr=10.0.0.2, port=4420 00:34:39.440 [2024-07-23 03:34:05.896899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187390 is same with the state(5) to be set 00:34:39.440 [2024-07-23 03:34:05.896926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2187390 (9): Bad file descriptor 00:34:39.440 [2024-07-23 03:34:05.896947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:39.440 [2024-07-23 03:34:05.896962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:39.440 [2024-07-23 03:34:05.896978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:39.440 Unable to reset the controller. 00:34:39.440 [2024-07-23 03:34:05.909298] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.440 [2024-07-23 03:34:05.909342] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.440 [2024-07-23 03:34:05.909368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.440 [2024-07-23 03:34:05.909391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.440 [2024-07-23 03:34:05.909412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
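A note for anyone triaging the block above: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting connections at 10.0.0.2:4420 while the host kept retrying, which is exactly the window the target-disconnect test creates before the controller is finally marked failed. A minimal hand-run sketch for decoding the errno and probing the port (assuming python3 is present and bash was built with /dev/tcp redirection support, both true on these build hosts):

  # Decode the errno value reported by posix_sock_create (111 == ECONNREFUSED).
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

  # Check whether anything is listening on the target address/port right now;
  # exit status 0 means the TCP handshake completed, non-zero means refused or filtered.
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "4420 open" || echo "4420 closed"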
00:34:39.440 [2024-07-23 03:34:05.909522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:39.440 [2024-07-23 03:34:05.909588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:39.440 [2024-07-23 03:34:05.909652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:39.440 [2024-07-23 03:34:05.909642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:39.699 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:39.699 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:39.699 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 Malloc0 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 [2024-07-23 03:34:06.104406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 03:34:06 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 [2024-07-23 03:34:06.132647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.700 03:34:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 596850 00:34:40.633 Controller properly reset. 00:34:45.895 Initializing NVMe Controllers 00:34:45.895 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:45.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:45.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:45.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:45.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:45.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:45.895 Initialization complete. Launching workers. 
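The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py client, so the same target bring-up can be reproduced by hand against a running nvmf_tgt. A sketch of the equivalent commands, copied from the host/target_disconnect.sh trace above (run from the SPDK repository root, default RPC socket assumed):

  # Backing bdev, TCP transport, subsystem, namespace and listeners, in the
  # same order the test issues them.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420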
00:34:45.895 Starting thread on core 1 00:34:45.895 Starting thread on core 2 00:34:45.895 Starting thread on core 3 00:34:45.895 Starting thread on core 0 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:45.895 00:34:45.895 real 0m10.608s 00:34:45.895 user 0m32.237s 00:34:45.895 sys 0m8.189s 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.895 ************************************ 00:34:45.895 END TEST nvmf_target_disconnect_tc2 00:34:45.895 ************************************ 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:45.895 rmmod nvme_tcp 00:34:45.895 rmmod nvme_fabrics 00:34:45.895 rmmod nvme_keyring 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 597372 ']' 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 597372 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 597372 ']' 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 597372 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 597372 00:34:45.895 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:45.896 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:45.896 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 597372' 00:34:45.896 killing process with pid 597372 00:34:45.896 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 597372 00:34:45.896 03:34:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 597372 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:45.896 03:34:12 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.896 03:34:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.800 03:34:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.800 00:34:47.800 real 0m15.181s 00:34:47.800 user 0m57.066s 00:34:47.800 sys 0m10.545s 00:34:47.800 03:34:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:47.800 03:34:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:47.800 ************************************ 00:34:47.800 END TEST nvmf_target_disconnect 00:34:47.800 ************************************ 00:34:47.800 03:34:14 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:47.800 03:34:14 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.800 03:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.800 03:34:14 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:47.800 00:34:47.800 real 27m4.740s 00:34:47.800 user 74m25.378s 00:34:47.800 sys 6m28.822s 00:34:47.800 03:34:14 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:47.800 03:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.800 ************************************ 00:34:47.800 END TEST nvmf_tcp 00:34:47.800 ************************************ 00:34:47.800 03:34:14 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:47.800 03:34:14 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:47.800 03:34:14 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:47.800 03:34:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:47.800 03:34:14 -- common/autotest_common.sh@10 -- # set +x 00:34:47.800 ************************************ 00:34:47.800 START TEST spdkcli_nvmf_tcp 00:34:47.800 ************************************ 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:47.800 * Looking for test storage... 
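The nvmftestfini teardown traced across the two blocks above reduces to unloading the host-side NVMe modules, killing the target process, and removing the test network namespace. A rough sketch of that sequence (the kill guard mirrors the killprocess helper in autotest_common.sh; the explicit `ip netns delete` is an assumption about what _remove_spdk_ns does, since its output is redirected away in the trace):

  # Unload host-side NVMe/TCP modules, matching the rmmod output above.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # killprocess-style guard: only signal the pid if it is still alive and is
  # not a sudo wrapper; wait only succeeds here because the harness started
  # the target from the same shell.
  pid=597372    # pid taken from the trace above
  if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      kill "$pid" && wait "$pid"
  fi

  # Tear down the test namespace and flush the initiator-side address
  # (assumption: _remove_spdk_ns deletes the *_ns_spdk namespace it created).
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1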
00:34:47.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:47.800 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:47.801 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=598447 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 598447 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 598447 ']' 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:48.060 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.060 [2024-07-23 03:34:14.423321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:48.060 [2024-07-23 03:34:14.423404] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598447 ] 00:34:48.060 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.060 [2024-07-23 03:34:14.481427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:48.060 [2024-07-23 03:34:14.568246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.060 [2024-07-23 03:34:14.568250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.318 03:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:48.318 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:48.319 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:48.319 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:48.319 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:48.319 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:48.319 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:48.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:48.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:48.319 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:48.319 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:48.319 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:48.319 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:48.319 ' 00:34:50.849 [2024-07-23 03:34:17.231243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.222 [2024-07-23 03:34:18.471556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:54.754 [2024-07-23 03:34:20.766791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:56.647 [2024-07-23 03:34:22.724951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:58.019 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:58.019 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:58.019 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:58.019 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:58.019 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:58.019 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:58.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:58.019 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:58.019 03:34:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:58.277 03:34:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:58.277 03:34:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:58.277 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:58.277 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:58.278 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.278 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:58.278 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:58.278 03:34:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.278 03:34:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:58.278 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:58.278 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:58.278 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:58.278 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:58.278 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:58.278 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:58.278 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:58.278 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:58.278 ' 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:03.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:03.546 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:03.547 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:03.547 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 598447 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 598447 ']' 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 598447 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 598447 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 598447' 00:35:03.547 killing process with pid 598447 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 598447 00:35:03.547 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 598447 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 598447 ']' 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 598447 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 598447 ']' 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 598447 00:35:03.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (598447) - No such process 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 598447 is not found' 00:35:03.805 Process with pid 598447 is not found 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:03.805 00:35:03.805 real 0m16.030s 00:35:03.805 user 0m33.909s 00:35:03.805 sys 0m0.813s 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:03.805 03:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.805 ************************************ 00:35:03.805 END TEST spdkcli_nvmf_tcp 00:35:03.805 ************************************ 00:35:03.805 03:34:30 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:03.805 03:34:30 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:03.805 03:34:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:03.805 03:34:30 -- common/autotest_common.sh@10 -- # set +x 00:35:04.064 ************************************ 00:35:04.064 START TEST nvmf_identify_passthru 00:35:04.064 ************************************ 00:35:04.064 03:34:30 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:04.064 * Looking for test storage... 00:35:04.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.064 03:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.064 03:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.064 03:34:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.064 03:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.064 03:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.064 03:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.064 03:34:30 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.064 03:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
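Both test suites above source nvmf/common.sh, which derives the host identity from nvme-cli. A quick way to reproduce the same --hostnqn/--hostid pair on a test box (assuming nvme-cli is installed; on machines that expose a DMI product UUID, as these build hosts appear to, repeated calls return the same value, otherwise a random UUID is generated):

  # NVME_HOSTNQN / NVME_HOSTID as used by nvmf/common.sh, recreated by hand.
  HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}              # strip the nqn prefix, keep the UUID
  echo "--hostnqn=$HOSTNQN --hostid=$HOSTID"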
00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.967 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:05.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:05.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:05.968 03:34:32 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:05.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:05.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:05.968 03:34:32 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:05.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:35:05.968 00:35:05.968 --- 10.0.0.2 ping statistics --- 00:35:05.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.968 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:35:05.968 00:35:05.968 --- 10.0.0.1 ping statistics --- 00:35:05.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.968 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:05.968 03:34:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:05.968 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.968 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:05.968 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:35:06.228 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:35:06.228 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:35:06.228 03:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:35:06.228 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:06.228 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:06.228 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:06.228 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:06.228 03:34:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:06.228 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.410 
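The nvmftestinit trace above (nvmf/common.sh, nvmf_tcp_init) brings up the NVMe/TCP test network on the two ice ports cvl_0_0 and cvl_0_1: the target port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps 10.0.0.1/24 on the host side, TCP port 4420 is opened, and both directions are ping-checked. A minimal stand-alone sketch of that sequence, assuming the NICs have already been renamed to cvl_0_0/cvl_0_1 as in this run:

  # target interface lives in its own namespace so target and initiator can share one host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # addressing: initiator 10.0.0.1 on the host side, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic (port 4420) through on the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target, as the trace does
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1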
03:34:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:10.410 03:34:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:10.410 03:34:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:10.410 03:34:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:10.410 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=602945 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.592 03:34:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 602945 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 602945 ']' 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:14.592 03:34:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.592 [2024-07-23 03:34:41.023077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:14.592 [2024-07-23 03:34:41.023172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.592 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.592 [2024-07-23 03:34:41.098358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:14.850 [2024-07-23 03:34:41.192489] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.850 [2024-07-23 03:34:41.192543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:14.850 [2024-07-23 03:34:41.192570] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.850 [2024-07-23 03:34:41.192584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.850 [2024-07-23 03:34:41.192596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.850 [2024-07-23 03:34:41.192660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.850 [2024-07-23 03:34:41.192716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.850 [2024-07-23 03:34:41.192752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:14.850 [2024-07-23 03:34:41.192754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.850 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:14.850 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:35:14.850 03:34:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:14.850 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.851 INFO: Log level set to 20 00:35:14.851 INFO: Requests: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "method": "nvmf_set_config", 00:35:14.851 "id": 1, 00:35:14.851 "params": { 00:35:14.851 "admin_cmd_passthru": { 00:35:14.851 "identify_ctrlr": true 00:35:14.851 } 00:35:14.851 } 00:35:14.851 } 00:35:14.851 00:35:14.851 INFO: response: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "id": 1, 00:35:14.851 "result": true 00:35:14.851 } 00:35:14.851 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.851 03:34:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.851 INFO: Setting log level to 20 00:35:14.851 INFO: Setting log level to 20 00:35:14.851 INFO: Log level set to 20 00:35:14.851 INFO: Log level set to 20 00:35:14.851 INFO: Requests: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "method": "framework_start_init", 00:35:14.851 "id": 1 00:35:14.851 } 00:35:14.851 00:35:14.851 INFO: Requests: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "method": "framework_start_init", 00:35:14.851 "id": 1 00:35:14.851 } 00:35:14.851 00:35:14.851 [2024-07-23 03:34:41.336818] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:14.851 INFO: response: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "id": 1, 00:35:14.851 "result": true 00:35:14.851 } 00:35:14.851 00:35:14.851 INFO: response: 00:35:14.851 { 00:35:14.851 "jsonrpc": "2.0", 00:35:14.851 "id": 1, 00:35:14.851 "result": true 00:35:14.851 } 00:35:14.851 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.851 03:34:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.851 03:34:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.851 INFO: Setting log level to 40 00:35:14.851 INFO: Setting log level to 40 00:35:14.851 INFO: Setting log level to 40 00:35:14.851 [2024-07-23 03:34:41.346792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.851 03:34:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.851 03:34:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.851 03:34:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 Nvme0n1 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 [2024-07-23 03:34:44.234024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 [ 00:35:18.126 { 00:35:18.126 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:18.126 "subtype": "Discovery", 00:35:18.126 "listen_addresses": [], 00:35:18.126 "allow_any_host": true, 00:35:18.126 "hosts": [] 00:35:18.126 }, 00:35:18.126 { 00:35:18.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.126 "subtype": "NVMe", 00:35:18.126 "listen_addresses": [ 00:35:18.126 { 00:35:18.126 "trtype": "TCP", 00:35:18.126 "adrfam": "IPv4", 00:35:18.126 "traddr": "10.0.0.2", 00:35:18.126 "trsvcid": "4420" 00:35:18.126 } 00:35:18.126 ], 00:35:18.126 "allow_any_host": true, 00:35:18.126 "hosts": [], 00:35:18.126 "serial_number": 
"SPDK00000000000001", 00:35:18.126 "model_number": "SPDK bdev Controller", 00:35:18.126 "max_namespaces": 1, 00:35:18.126 "min_cntlid": 1, 00:35:18.126 "max_cntlid": 65519, 00:35:18.126 "namespaces": [ 00:35:18.126 { 00:35:18.126 "nsid": 1, 00:35:18.126 "bdev_name": "Nvme0n1", 00:35:18.126 "name": "Nvme0n1", 00:35:18.126 "nguid": "A0B9240DB4494DF78833B7EF44733DA9", 00:35:18.126 "uuid": "a0b9240d-b449-4df7-8833-b7ef44733da9" 00:35:18.126 } 00:35:18.126 ] 00:35:18.126 } 00:35:18.126 ] 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:18.126 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:18.126 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:18.126 03:34:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:18.126 rmmod nvme_tcp 00:35:18.126 rmmod nvme_fabrics 00:35:18.126 rmmod nvme_keyring 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:18.126 03:34:44 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 602945 ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 602945 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 602945 ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 602945 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:18.126 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 602945 00:35:18.383 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:18.383 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:18.383 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 602945' 00:35:18.383 killing process with pid 602945 00:35:18.383 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 602945 00:35:18.383 03:34:44 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 602945 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:19.756 03:34:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.756 03:34:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.756 03:34:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.318 03:34:48 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:22.318 00:35:22.318 real 0m17.912s 00:35:22.318 user 0m26.711s 00:35:22.318 sys 0m2.256s 00:35:22.318 03:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:22.318 03:34:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.318 ************************************ 00:35:22.318 END TEST nvmf_identify_passthru 00:35:22.318 ************************************ 00:35:22.318 03:34:48 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.318 03:34:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:22.318 03:34:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:22.318 03:34:48 -- common/autotest_common.sh@10 -- # set +x 00:35:22.318 ************************************ 00:35:22.318 START TEST nvmf_dif 00:35:22.318 ************************************ 00:35:22.318 03:34:48 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.318 * Looking for test storage... 
00:35:22.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.318 03:34:48 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.318 03:34:48 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.318 03:34:48 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.318 03:34:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.318 03:34:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.318 03:34:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.318 03:34:48 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:22.318 03:34:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:22.318 03:34:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.318 03:34:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:22.318 03:34:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:22.318 03:34:48 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:22.318 03:34:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:24.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:24.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:24.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.220 03:34:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:24.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.221 03:34:50 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:24.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:35:24.221 00:35:24.221 --- 10.0.0.2 ping statistics --- 00:35:24.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.221 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:24.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:35:24.221 00:35:24.221 --- 10.0.0.1 ping statistics --- 00:35:24.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.221 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:24.221 03:34:50 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.155 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:25.155 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:25.155 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:25.155 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:25.155 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:25.155 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:25.155 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:25.155 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:25.155 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:25.155 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:25.155 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:25.155 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:25.155 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:25.155 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:25.155 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:25.156 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:25.156 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:25.156 03:34:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:25.156 03:34:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=606190 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:25.156 03:34:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 606190 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 606190 ']' 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:25.156 03:34:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.156 [2024-07-23 03:34:51.716150] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:25.156 [2024-07-23 03:34:51.716231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.414 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.414 [2024-07-23 03:34:51.785434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.414 [2024-07-23 03:34:51.876211] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.414 [2024-07-23 03:34:51.876268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.414 [2024-07-23 03:34:51.876285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.414 [2024-07-23 03:34:51.876299] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.414 [2024-07-23 03:34:51.876311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:25.414 [2024-07-23 03:34:51.876346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.671 03:34:51 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:25.671 03:34:51 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:25.671 03:34:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:25.671 03:34:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.671 03:34:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.671 03:34:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.671 03:34:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:25.671 03:34:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.671 [2024-07-23 03:34:52.024383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.671 03:34:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:25.671 03:34:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.671 ************************************ 00:35:25.672 START TEST fio_dif_1_default 00:35:25.672 ************************************ 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.672 bdev_null0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.672 [2024-07-23 03:34:52.084717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.672 { 00:35:25.672 "params": { 00:35:25.672 "name": "Nvme$subsystem", 00:35:25.672 "trtype": "$TEST_TRANSPORT", 00:35:25.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.672 "adrfam": "ipv4", 00:35:25.672 "trsvcid": "$NVMF_PORT", 00:35:25.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.672 "hdgst": ${hdgst:-false}, 00:35:25.672 "ddgst": ${ddgst:-false} 00:35:25.672 }, 00:35:25.672 "method": "bdev_nvme_attach_controller" 00:35:25.672 } 00:35:25.672 EOF 00:35:25.672 )") 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:25.672 "params": { 00:35:25.672 "name": "Nvme0", 00:35:25.672 "trtype": "tcp", 00:35:25.672 "traddr": "10.0.0.2", 00:35:25.672 "adrfam": "ipv4", 00:35:25.672 "trsvcid": "4420", 00:35:25.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.672 "hdgst": false, 00:35:25.672 "ddgst": false 00:35:25.672 }, 00:35:25.672 "method": "bdev_nvme_attach_controller" 00:35:25.672 }' 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.672 03:34:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:25.931 fio-3.35 00:35:25.931 Starting 1 thread 00:35:25.931 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.130 00:35:38.130 filename0: (groupid=0, jobs=1): err= 0: pid=606426: Tue Jul 23 03:35:03 2024 00:35:38.130 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10031msec) 00:35:38.130 slat (nsec): min=4746, max=52011, avg=8964.03, stdev=3695.35 00:35:38.130 clat (usec): min=890, max=48123, avg=21267.70, stdev=20283.73 00:35:38.130 lat (usec): min=897, max=48144, avg=21276.66, stdev=20283.38 00:35:38.130 clat percentiles (usec): 00:35:38.130 | 1.00th=[ 906], 5.00th=[ 922], 10.00th=[ 922], 20.00th=[ 938], 00:35:38.130 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41157], 00:35:38.130 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:38.130 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:35:38.130 | 99.99th=[47973] 00:35:38.130 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=752.00, stdev=28.43, samples=20 00:35:38.130 iops : min= 176, max= 192, 
avg=188.00, stdev= 7.11, samples=20 00:35:38.130 lat (usec) : 1000=44.27% 00:35:38.130 lat (msec) : 2=5.63%, 50=50.11% 00:35:38.130 cpu : usr=89.15%, sys=10.54%, ctx=24, majf=0, minf=242 00:35:38.130 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.130 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.130 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:38.130 00:35:38.130 Run status group 0 (all jobs): 00:35:38.130 READ: bw=751KiB/s (769kB/s), 751KiB/s-751KiB/s (769kB/s-769kB/s), io=7536KiB (7717kB), run=10031-10031msec 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.130 00:35:38.130 real 0m11.317s 00:35:38.130 user 0m10.283s 00:35:38.130 sys 0m1.359s 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.130 ************************************ 00:35:38.130 END TEST fio_dif_1_default 00:35:38.130 ************************************ 00:35:38.130 03:35:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:38.130 03:35:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:38.130 03:35:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:38.130 03:35:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.130 ************************************ 00:35:38.130 START TEST fio_dif_1_multi_subsystems 00:35:38.130 ************************************ 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.130 03:35:03 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.130 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.130 bdev_null0 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 [2024-07-23 03:35:03.449831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 bdev_null1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:38.131 { 00:35:38.131 "params": { 00:35:38.131 "name": "Nvme$subsystem", 00:35:38.131 "trtype": "$TEST_TRANSPORT", 00:35:38.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.131 "adrfam": "ipv4", 00:35:38.131 "trsvcid": "$NVMF_PORT", 00:35:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.131 "hdgst": ${hdgst:-false}, 00:35:38.131 "ddgst": ${ddgst:-false} 00:35:38.131 }, 00:35:38.131 "method": "bdev_nvme_attach_controller" 00:35:38.131 } 00:35:38.131 EOF 00:35:38.131 )") 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.131 
03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:38.131 { 00:35:38.131 "params": { 00:35:38.131 "name": "Nvme$subsystem", 00:35:38.131 "trtype": "$TEST_TRANSPORT", 00:35:38.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.131 "adrfam": "ipv4", 00:35:38.131 "trsvcid": "$NVMF_PORT", 00:35:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.131 "hdgst": ${hdgst:-false}, 00:35:38.131 "ddgst": ${ddgst:-false} 00:35:38.131 }, 00:35:38.131 "method": "bdev_nvme_attach_controller" 00:35:38.131 } 00:35:38.131 EOF 00:35:38.131 )") 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
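The xtrace above shows gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem; the IFS=, and printf entries that follow join the two stanzas, and jq pretty-prints the result before fio reads it from /dev/fd/62. A minimal stand-alone sketch of that flow, using only values visible in the trace (the attach_stanza helper and the outer "subsystems"/"config" wrapper are illustrative, not copied from nvmf/common.sh):

#!/usr/bin/env bash
# Build the JSON the spdk_bdev fio ioengine consumes: one attach-controller
# stanza per NVMe-oF/TCP subsystem, joined with commas as in the trace.
attach_stanza() {   # illustrative helper, not part of dif.sh or nvmf/common.sh
  local i=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=("$(attach_stanza 0)" "$(attach_stanza 1)")
joined=$(IFS=,; printf '%s\n' "${config[*]}")    # same IFS=, join as nvmf/common.sh@557-558
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "$joined" | jq .

The resolved two-controller JSON that the real helper prints appears verbatim in the next trace lines.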
00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:38.131 "params": { 00:35:38.131 "name": "Nvme0", 00:35:38.131 "trtype": "tcp", 00:35:38.131 "traddr": "10.0.0.2", 00:35:38.131 "adrfam": "ipv4", 00:35:38.131 "trsvcid": "4420", 00:35:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.131 "hdgst": false, 00:35:38.131 "ddgst": false 00:35:38.131 }, 00:35:38.131 "method": "bdev_nvme_attach_controller" 00:35:38.131 },{ 00:35:38.131 "params": { 00:35:38.131 "name": "Nvme1", 00:35:38.131 "trtype": "tcp", 00:35:38.131 "traddr": "10.0.0.2", 00:35:38.131 "adrfam": "ipv4", 00:35:38.131 "trsvcid": "4420", 00:35:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.131 "hdgst": false, 00:35:38.131 "ddgst": false 00:35:38.131 }, 00:35:38.131 "method": "bdev_nvme_attach_controller" 00:35:38.131 }' 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:38.131 03:35:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.131 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.131 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.131 fio-3.35 00:35:38.131 Starting 2 threads 00:35:38.131 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.099 00:35:48.099 filename0: (groupid=0, jobs=1): err= 0: pid=607843: Tue Jul 23 03:35:14 2024 00:35:48.099 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10019msec) 00:35:48.099 slat (nsec): min=7127, max=81980, avg=10688.66, stdev=5157.59 00:35:48.099 clat (usec): min=40882, max=42856, avg=41540.60, stdev=493.65 00:35:48.099 lat (usec): min=40895, max=42868, avg=41551.29, stdev=494.24 00:35:48.099 clat percentiles (usec): 00:35:48.099 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:48.099 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:48.099 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:48.099 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:48.099 | 99.99th=[42730] 
00:35:48.099 bw ( KiB/s): min= 352, max= 416, per=49.90%, avg=384.00, stdev=10.38, samples=20 00:35:48.099 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:35:48.099 lat (msec) : 50=100.00% 00:35:48.099 cpu : usr=94.23%, sys=5.47%, ctx=32, majf=0, minf=169 00:35:48.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.099 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.099 filename1: (groupid=0, jobs=1): err= 0: pid=607844: Tue Jul 23 03:35:14 2024 00:35:48.099 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10021msec) 00:35:48.099 slat (nsec): min=7114, max=89776, avg=10774.75, stdev=5298.71 00:35:48.099 clat (usec): min=40888, max=43030, avg=41548.75, stdev=501.11 00:35:48.099 lat (usec): min=40896, max=43042, avg=41559.53, stdev=501.74 00:35:48.099 clat percentiles (usec): 00:35:48.099 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:48.099 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:48.099 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:48.099 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:48.099 | 99.99th=[43254] 00:35:48.099 bw ( KiB/s): min= 352, max= 416, per=49.90%, avg=384.00, stdev=10.38, samples=20 00:35:48.099 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:35:48.099 lat (msec) : 50=100.00% 00:35:48.099 cpu : usr=94.74%, sys=4.97%, ctx=23, majf=0, minf=144 00:35:48.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.099 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.099 00:35:48.099 Run status group 0 (all jobs): 00:35:48.099 READ: bw=770KiB/s (788kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=7712KiB (7897kB), run=10019-10021msec 00:35:48.358 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:48.358 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:48.358 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.358 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 00:35:48.359 real 0m11.331s 00:35:48.359 user 0m20.073s 00:35:48.359 sys 0m1.320s 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 ************************************ 00:35:48.359 END TEST fio_dif_1_multi_subsystems 00:35:48.359 ************************************ 00:35:48.359 03:35:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:48.359 03:35:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:48.359 03:35:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 ************************************ 00:35:48.359 START TEST fio_dif_rand_params 00:35:48.359 ************************************ 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 bdev_null0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 [2024-07-23 03:35:14.834543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.359 { 00:35:48.359 "params": { 00:35:48.359 "name": "Nvme$subsystem", 00:35:48.359 "trtype": "$TEST_TRANSPORT", 00:35:48.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.359 "adrfam": "ipv4", 00:35:48.359 "trsvcid": "$NVMF_PORT", 00:35:48.359 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.359 "hdgst": ${hdgst:-false}, 00:35:48.359 "ddgst": ${ddgst:-false} 00:35:48.359 }, 00:35:48.359 "method": "bdev_nvme_attach_controller" 00:35:48.359 } 00:35:48.359 EOF 00:35:48.359 )") 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:48.359 "params": { 00:35:48.359 "name": "Nvme0", 00:35:48.359 "trtype": "tcp", 00:35:48.359 "traddr": "10.0.0.2", 00:35:48.359 "adrfam": "ipv4", 00:35:48.359 "trsvcid": "4420", 00:35:48.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.359 "hdgst": false, 00:35:48.359 "ddgst": false 00:35:48.359 }, 00:35:48.359 "method": "bdev_nvme_attach_controller" 00:35:48.359 }' 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.359 03:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.617 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:48.617 ... 
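The filename0 line above and the "Starting 3 threads" banner that follows reflect the job file gen_fio_conf handed to fio on /dev/fd/61 for this pass (NULL_DIF=3: bs=128k, numjobs=3, iodepth=3, runtime=5, one file). The generated file itself is not shown in the trace; the sketch below is a plausible reconstruction from the parameters that are, with the filename Nvme0n1 assumed to be the bdev exposed by the attached controller:

#!/usr/bin/env bash
# Reconstruct a job file equivalent to what the trace implies; settings not
# echoed in the trace (thread, time_based, the filename) are assumptions.
cat > dif_rand_params.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF

# Run it the way the trace does: preload the SPDK fio plugin and pass the
# target JSON separately (the test uses /dev/fd/62; a saved copy of the JSON
# from the earlier sketch would serve the same purpose here).
# LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
#     /usr/src/fio/fio --ioengine=spdk_bdev \
#     --spdk_json_conf=nvme0.json dif_rand_params.fio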
00:35:48.617 fio-3.35 00:35:48.617 Starting 3 threads 00:35:48.617 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.220 00:35:55.220 filename0: (groupid=0, jobs=1): err= 0: pid=609235: Tue Jul 23 03:35:20 2024 00:35:55.220 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(139MiB/5043msec) 00:35:55.220 slat (nsec): min=4497, max=85480, avg=14402.11, stdev=5270.14 00:35:55.220 clat (usec): min=5702, max=89618, avg=13631.92, stdev=11976.86 00:35:55.220 lat (usec): min=5721, max=89631, avg=13646.32, stdev=11976.94 00:35:55.220 clat percentiles (usec): 00:35:55.220 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8160], 00:35:55.220 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11207], 00:35:55.220 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14484], 95.00th=[51119], 00:35:55.220 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[89654], 00:35:55.220 | 99.99th=[89654] 00:35:55.220 bw ( KiB/s): min=24320, max=34304, per=39.36%, avg=28293.00, stdev=3308.97, samples=10 00:35:55.220 iops : min= 190, max= 268, avg=221.00, stdev=25.89, samples=10 00:35:55.220 lat (msec) : 10=48.65%, 20=42.78%, 50=2.08%, 100=6.50% 00:35:55.220 cpu : usr=91.59%, sys=7.89%, ctx=10, majf=0, minf=207 00:35:55.220 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.220 filename0: (groupid=0, jobs=1): err= 0: pid=609236: Tue Jul 23 03:35:20 2024 00:35:55.220 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(112MiB/5007msec) 00:35:55.220 slat (nsec): min=4859, max=49092, avg=17132.52, stdev=6513.77 00:35:55.220 clat (usec): min=5853, max=95792, avg=16719.17, stdev=13771.40 00:35:55.220 lat (usec): min=5873, max=95815, avg=16736.30, stdev=13771.54 00:35:55.220 clat percentiles (usec): 00:35:55.220 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[ 9241], 00:35:55.220 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12649], 60.00th=[13566], 00:35:55.220 | 70.00th=[14222], 80.00th=[15401], 90.00th=[50594], 95.00th=[53740], 00:35:55.220 | 99.00th=[56361], 99.50th=[57934], 99.90th=[95945], 99.95th=[95945], 00:35:55.220 | 99.99th=[95945] 00:35:55.220 bw ( KiB/s): min=14848, max=28160, per=31.84%, avg=22886.40, stdev=5037.12, samples=10 00:35:55.220 iops : min= 116, max= 220, avg=178.80, stdev=39.35, samples=10 00:35:55.220 lat (msec) : 10=24.75%, 20=63.32%, 50=1.45%, 100=10.48% 00:35:55.220 cpu : usr=91.59%, sys=7.15%, ctx=238, majf=0, minf=73 00:35:55.220 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 issued rwts: total=897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.220 filename0: (groupid=0, jobs=1): err= 0: pid=609237: Tue Jul 23 03:35:20 2024 00:35:55.220 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(104MiB/5047msec) 00:35:55.220 slat (nsec): min=4562, max=41743, avg=13813.04, stdev=4515.06 00:35:55.220 clat (usec): min=6811, max=94679, avg=18190.62, stdev=15092.21 00:35:55.220 lat (usec): min=6824, max=94692, avg=18204.44, stdev=15092.32 00:35:55.220 clat percentiles (usec): 
00:35:55.220 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[10028], 00:35:55.220 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13042], 60.00th=[13698], 00:35:55.220 | 70.00th=[14746], 80.00th=[15926], 90.00th=[51643], 95.00th=[53740], 00:35:55.220 | 99.00th=[56886], 99.50th=[59507], 99.90th=[94897], 99.95th=[94897], 00:35:55.220 | 99.99th=[94897] 00:35:55.220 bw ( KiB/s): min=14080, max=28416, per=29.41%, avg=21140.20, stdev=4898.62, samples=10 00:35:55.220 iops : min= 110, max= 222, avg=165.10, stdev=38.20, samples=10 00:35:55.220 lat (msec) : 10=20.27%, 20=65.38%, 50=2.05%, 100=12.30% 00:35:55.220 cpu : usr=93.34%, sys=6.24%, ctx=12, majf=0, minf=82 00:35:55.220 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.220 issued rwts: total=829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.220 00:35:55.220 Run status group 0 (all jobs): 00:35:55.220 READ: bw=70.2MiB/s (73.6MB/s), 20.5MiB/s-27.5MiB/s (21.5MB/s-28.8MB/s), io=354MiB (371MB), run=5007-5047msec 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.220 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 bdev_null0 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 [2024-07-23 03:35:20.983265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 bdev_null1 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 bdev_null2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:55.221 03:35:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.221 { 00:35:55.221 "params": { 00:35:55.221 "name": "Nvme$subsystem", 00:35:55.221 "trtype": "$TEST_TRANSPORT", 00:35:55.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.221 "adrfam": "ipv4", 00:35:55.221 "trsvcid": "$NVMF_PORT", 00:35:55.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.221 "hdgst": ${hdgst:-false}, 00:35:55.221 "ddgst": ${ddgst:-false} 00:35:55.221 }, 00:35:55.221 "method": "bdev_nvme_attach_controller" 00:35:55.221 } 00:35:55.221 EOF 00:35:55.221 )") 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.221 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.221 { 00:35:55.221 "params": { 00:35:55.221 "name": "Nvme$subsystem", 00:35:55.221 "trtype": "$TEST_TRANSPORT", 00:35:55.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.221 "adrfam": "ipv4", 00:35:55.221 "trsvcid": "$NVMF_PORT", 00:35:55.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.221 "hdgst": ${hdgst:-false}, 00:35:55.221 "ddgst": ${ddgst:-false} 00:35:55.221 }, 00:35:55.221 "method": "bdev_nvme_attach_controller" 00:35:55.221 } 00:35:55.222 EOF 00:35:55.222 )") 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.222 { 00:35:55.222 "params": { 00:35:55.222 "name": "Nvme$subsystem", 00:35:55.222 "trtype": "$TEST_TRANSPORT", 00:35:55.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.222 "adrfam": "ipv4", 00:35:55.222 "trsvcid": "$NVMF_PORT", 00:35:55.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.222 "hdgst": ${hdgst:-false}, 00:35:55.222 "ddgst": ${ddgst:-false} 00:35:55.222 }, 00:35:55.222 "method": "bdev_nvme_attach_controller" 00:35:55.222 } 00:35:55.222 EOF 00:35:55.222 )") 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:55.222 "params": { 00:35:55.222 "name": "Nvme0", 00:35:55.222 "trtype": "tcp", 00:35:55.222 "traddr": "10.0.0.2", 00:35:55.222 "adrfam": "ipv4", 00:35:55.222 "trsvcid": "4420", 00:35:55.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.222 "hdgst": false, 00:35:55.222 "ddgst": false 00:35:55.222 }, 00:35:55.222 "method": "bdev_nvme_attach_controller" 00:35:55.222 },{ 00:35:55.222 "params": { 00:35:55.222 "name": "Nvme1", 00:35:55.222 "trtype": "tcp", 00:35:55.222 "traddr": "10.0.0.2", 00:35:55.222 "adrfam": "ipv4", 00:35:55.222 "trsvcid": "4420", 00:35:55.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.222 "hdgst": false, 00:35:55.222 "ddgst": false 00:35:55.222 }, 00:35:55.222 "method": "bdev_nvme_attach_controller" 00:35:55.222 },{ 00:35:55.222 "params": { 00:35:55.222 "name": "Nvme2", 00:35:55.222 "trtype": "tcp", 00:35:55.222 "traddr": "10.0.0.2", 00:35:55.222 "adrfam": "ipv4", 00:35:55.222 "trsvcid": "4420", 00:35:55.222 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:55.222 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:55.222 "hdgst": false, 00:35:55.222 "ddgst": false 00:35:55.222 }, 00:35:55.222 "method": "bdev_nvme_attach_controller" 00:35:55.222 }' 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.222 03:35:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.222 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.222 ... 00:35:55.222 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.222 ... 00:35:55.222 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.222 ... 00:35:55.222 fio-3.35 00:35:55.222 Starting 24 threads 00:35:55.222 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.422 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610104: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.5MiB/10015msec) 00:36:07.422 slat (nsec): min=5992, max=86405, avg=20356.17, stdev=16013.19 00:36:07.422 clat (usec): min=14378, max=49476, avg=33714.15, stdev=2012.06 00:36:07.422 lat (usec): min=14384, max=49524, avg=33734.50, stdev=2012.87 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[26346], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:36:07.422 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:07.422 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:36:07.422 | 99.00th=[38011], 99.50th=[43779], 99.90th=[44827], 99.95th=[49546], 00:36:07.422 | 99.99th=[49546] 00:36:07.422 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1883.80, stdev=57.18, samples=20 00:36:07.422 iops : min= 448, max= 480, avg=470.95, stdev=14.30, samples=20 00:36:07.422 lat (msec) : 20=0.51%, 50=99.49% 00:36:07.422 cpu : usr=97.90%, sys=1.65%, ctx=23, majf=0, minf=9 00:36:07.422 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610105: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=470, BW=1882KiB/s (1927kB/s)(18.4MiB/10011msec) 00:36:07.422 slat (usec): min=7, max=108, avg=39.62, stdev=21.41 00:36:07.422 clat (usec): min=18932, max=56736, avg=33668.19, stdev=2547.99 00:36:07.422 lat (usec): min=18968, max=56771, avg=33707.81, stdev=2547.99 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[22152], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:07.422 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:36:07.422 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.422 | 99.00th=[43779], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:36:07.422 | 99.99th=[56886] 00:36:07.422 bw ( KiB/s): min= 1664, max= 1984, per=4.17%, avg=1875.37, stdev=77.03, samples=19 00:36:07.422 iops : min= 416, max= 496, avg=468.84, stdev=19.26, samples=19 00:36:07.422 lat (msec) : 20=0.34%, 50=99.07%, 100=0.59% 00:36:07.422 
cpu : usr=98.29%, sys=1.25%, ctx=26, majf=0, minf=9 00:36:07.422 IO depths : 1=3.4%, 2=9.5%, 4=24.7%, 8=53.3%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610106: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=469, BW=1878KiB/s (1923kB/s)(18.4MiB/10020msec) 00:36:07.422 slat (usec): min=8, max=111, avg=31.44, stdev=29.49 00:36:07.422 clat (usec): min=23582, max=46054, avg=33795.34, stdev=1548.38 00:36:07.422 lat (usec): min=23593, max=46075, avg=33826.77, stdev=1545.80 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:07.422 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.422 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.422 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:36:07.422 | 99.99th=[45876] 00:36:07.422 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1875.20, stdev=75.15, samples=20 00:36:07.422 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:36:07.422 lat (msec) : 50=100.00% 00:36:07.422 cpu : usr=98.18%, sys=1.41%, ctx=13, majf=0, minf=9 00:36:07.422 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610107: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10021msec) 00:36:07.422 slat (usec): min=7, max=115, avg=70.32, stdev=17.84 00:36:07.422 clat (usec): min=12492, max=44888, avg=33317.55, stdev=1963.96 00:36:07.422 lat (usec): min=12500, max=44969, avg=33387.87, stdev=1965.09 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[29492], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:07.422 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:07.422 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:36:07.422 | 99.00th=[38011], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:36:07.422 | 99.99th=[44827] 00:36:07.422 bw ( KiB/s): min= 1792, max= 1923, per=4.19%, avg=1881.75, stdev=60.29, samples=20 00:36:07.422 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:36:07.422 lat (msec) : 20=0.49%, 50=99.51% 00:36:07.422 cpu : usr=98.12%, sys=1.43%, ctx=16, majf=0, minf=9 00:36:07.422 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610108: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:36:07.422 slat 
(usec): min=8, max=101, avg=39.33, stdev=18.64 00:36:07.422 clat (usec): min=19212, max=75208, avg=33781.02, stdev=2813.51 00:36:07.422 lat (usec): min=19246, max=75226, avg=33820.35, stdev=2813.21 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.422 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:36:07.422 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.422 | 99.00th=[42206], 99.50th=[44303], 99.90th=[74974], 99.95th=[74974], 00:36:07.422 | 99.99th=[74974] 00:36:07.422 bw ( KiB/s): min= 1539, max= 1920, per=4.15%, avg=1866.26, stdev=97.81, samples=19 00:36:07.422 iops : min= 384, max= 480, avg=466.53, stdev=24.59, samples=19 00:36:07.422 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:36:07.422 cpu : usr=93.87%, sys=3.25%, ctx=244, majf=0, minf=9 00:36:07.422 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610109: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=469, BW=1878KiB/s (1923kB/s)(18.4MiB/10020msec) 00:36:07.422 slat (usec): min=8, max=106, avg=34.99, stdev=20.02 00:36:07.422 clat (usec): min=23102, max=57511, avg=33770.63, stdev=1678.91 00:36:07.422 lat (usec): min=23126, max=57530, avg=33805.62, stdev=1676.69 00:36:07.422 clat percentiles (usec): 00:36:07.422 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.422 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.422 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.422 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45876], 99.95th=[57410], 00:36:07.422 | 99.99th=[57410] 00:36:07.422 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1875.20, stdev=75.15, samples=20 00:36:07.422 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:36:07.422 lat (msec) : 50=99.91%, 100=0.09% 00:36:07.422 cpu : usr=94.70%, sys=3.01%, ctx=75, majf=0, minf=9 00:36:07.422 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.422 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.422 filename0: (groupid=0, jobs=1): err= 0: pid=610110: Tue Jul 23 03:35:32 2024 00:36:07.422 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:36:07.422 slat (nsec): min=8522, max=77544, avg=31786.85, stdev=11765.11 00:36:07.422 clat (usec): min=23907, max=57847, avg=33864.18, stdev=1961.21 00:36:07.422 lat (usec): min=23919, max=57872, avg=33895.96, stdev=1960.70 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.423 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.423 | 99.00th=[42730], 99.50th=[44303], 99.90th=[57934], 99.95th=[57934], 00:36:07.423 | 99.99th=[57934] 
00:36:07.423 bw ( KiB/s): min= 1536, max= 1936, per=4.17%, avg=1872.84, stdev=97.54, samples=19 00:36:07.423 iops : min= 384, max= 484, avg=468.21, stdev=24.38, samples=19 00:36:07.423 lat (msec) : 50=99.62%, 100=0.38% 00:36:07.423 cpu : usr=97.73%, sys=1.65%, ctx=61, majf=0, minf=11 00:36:07.423 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename0: (groupid=0, jobs=1): err= 0: pid=610111: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10005msec) 00:36:07.423 slat (nsec): min=8632, max=92526, avg=34698.08, stdev=14259.69 00:36:07.423 clat (usec): min=19085, max=83231, avg=33833.02, stdev=2655.68 00:36:07.423 lat (usec): min=19120, max=83276, avg=33867.72, stdev=2656.52 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.423 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.423 | 99.00th=[42730], 99.50th=[48497], 99.90th=[64226], 99.95th=[82314], 00:36:07.423 | 99.99th=[83362] 00:36:07.423 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1866.11, stdev=98.37, samples=19 00:36:07.423 iops : min= 384, max= 480, avg=466.53, stdev=24.59, samples=19 00:36:07.423 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:36:07.423 cpu : usr=96.92%, sys=2.08%, ctx=155, majf=0, minf=9 00:36:07.423 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610112: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10025msec) 00:36:07.423 slat (nsec): min=6789, max=98583, avg=32587.22, stdev=16492.69 00:36:07.423 clat (usec): min=15294, max=53238, avg=33700.01, stdev=1980.36 00:36:07.423 lat (usec): min=15348, max=53265, avg=33732.59, stdev=1980.02 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[30802], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:36:07.423 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.423 | 99.00th=[38536], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:36:07.423 | 99.99th=[53216] 00:36:07.423 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1881.60, stdev=60.18, samples=20 00:36:07.423 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:36:07.423 lat (msec) : 20=0.64%, 50=99.30%, 100=0.06% 00:36:07.423 cpu : usr=97.35%, sys=1.81%, ctx=176, majf=0, minf=9 00:36:07.423 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: 
total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610113: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=470, BW=1884KiB/s (1929kB/s)(18.4MiB/10022msec) 00:36:07.423 slat (nsec): min=7930, max=66349, avg=14838.62, stdev=9571.38 00:36:07.423 clat (usec): min=13267, max=44571, avg=33837.44, stdev=1743.52 00:36:07.423 lat (usec): min=13275, max=44593, avg=33852.28, stdev=1742.93 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:36:07.423 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[36439], 00:36:07.423 | 99.00th=[38011], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:36:07.423 | 99.99th=[44827] 00:36:07.423 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1881.75, stdev=59.95, samples=20 00:36:07.423 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:36:07.423 lat (msec) : 20=0.34%, 50=99.66% 00:36:07.423 cpu : usr=98.29%, sys=1.30%, ctx=16, majf=0, minf=9 00:36:07.423 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610114: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10001msec) 00:36:07.423 slat (usec): min=7, max=368, avg=32.12, stdev=23.36 00:36:07.423 clat (usec): min=18072, max=60152, avg=33840.26, stdev=2964.87 00:36:07.423 lat (usec): min=18099, max=60180, avg=33872.38, stdev=2962.37 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[23725], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:07.423 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[36963], 00:36:07.423 | 99.00th=[49021], 99.50th=[50594], 99.90th=[60031], 99.95th=[60031], 00:36:07.423 | 99.99th=[60031] 00:36:07.423 bw ( KiB/s): min= 1536, max= 1936, per=4.17%, avg=1872.84, stdev=96.51, samples=19 00:36:07.423 iops : min= 384, max= 484, avg=468.21, stdev=24.13, samples=19 00:36:07.423 lat (msec) : 20=0.90%, 50=98.44%, 100=0.66% 00:36:07.423 cpu : usr=88.45%, sys=5.17%, ctx=188, majf=0, minf=9 00:36:07.423 IO depths : 1=4.6%, 2=10.8%, 4=24.7%, 8=52.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610115: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=472, BW=1889KiB/s (1934kB/s)(18.5MiB/10024msec) 00:36:07.423 slat (usec): min=6, max=127, avg=31.07, stdev=28.47 00:36:07.423 clat (usec): min=13429, max=52948, avg=33605.39, stdev=2343.64 00:36:07.423 lat (usec): min=13435, max=52963, avg=33636.46, stdev=2340.97 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[25297], 5.00th=[32375], 10.00th=[32637], 
20.00th=[33162], 00:36:07.423 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:36:07.423 | 99.00th=[40109], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:36:07.423 | 99.99th=[52691] 00:36:07.423 bw ( KiB/s): min= 1792, max= 1923, per=4.20%, avg=1887.15, stdev=54.79, samples=20 00:36:07.423 iops : min= 448, max= 480, avg=471.75, stdev=13.67, samples=20 00:36:07.423 lat (msec) : 20=0.80%, 50=99.16%, 100=0.04% 00:36:07.423 cpu : usr=94.47%, sys=2.85%, ctx=50, majf=0, minf=11 00:36:07.423 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610116: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=468, BW=1874KiB/s (1918kB/s)(18.3MiB/10009msec) 00:36:07.423 slat (usec): min=8, max=115, avg=37.47, stdev=21.73 00:36:07.423 clat (usec): min=23190, max=67960, avg=33801.55, stdev=2403.84 00:36:07.423 lat (usec): min=23209, max=67996, avg=33839.02, stdev=2402.89 00:36:07.423 clat percentiles (usec): 00:36:07.423 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.423 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.423 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.423 | 99.00th=[43254], 99.50th=[44827], 99.90th=[67634], 99.95th=[67634], 00:36:07.423 | 99.99th=[67634] 00:36:07.423 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1866.11, stdev=98.37, samples=19 00:36:07.423 iops : min= 384, max= 480, avg=466.53, stdev=24.59, samples=19 00:36:07.423 lat (msec) : 50=99.66%, 100=0.34% 00:36:07.423 cpu : usr=98.37%, sys=1.22%, ctx=14, majf=0, minf=9 00:36:07.423 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.423 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.423 filename1: (groupid=0, jobs=1): err= 0: pid=610117: Tue Jul 23 03:35:32 2024 00:36:07.423 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:36:07.423 slat (usec): min=9, max=170, avg=49.86, stdev=21.90 00:36:07.423 clat (usec): min=25167, max=57993, avg=33720.11, stdev=1884.51 00:36:07.423 lat (usec): min=25180, max=58008, avg=33769.97, stdev=1881.40 00:36:07.423 clat percentiles (usec): 00:36:07.424 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:07.424 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:07.424 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.424 | 99.00th=[42206], 99.50th=[44303], 99.90th=[57934], 99.95th=[57934], 00:36:07.424 | 99.99th=[57934] 00:36:07.424 bw ( KiB/s): min= 1536, max= 1920, per=4.17%, avg=1872.84, stdev=97.39, samples=19 00:36:07.424 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:36:07.424 lat (msec) : 50=99.66%, 100=0.34% 00:36:07.424 cpu : usr=96.79%, sys=2.08%, ctx=59, majf=0, minf=9 00:36:07.424 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename1: (groupid=0, jobs=1): err= 0: pid=610118: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10012msec) 00:36:07.424 slat (usec): min=8, max=122, avg=71.49, stdev=17.68 00:36:07.424 clat (usec): min=14985, max=68958, avg=33348.69, stdev=3037.82 00:36:07.424 lat (usec): min=15011, max=68991, avg=33420.18, stdev=3037.98 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[21103], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:07.424 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:07.424 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.424 | 99.00th=[44827], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:36:07.424 | 99.99th=[68682] 00:36:07.424 bw ( KiB/s): min= 1667, max= 2016, per=4.17%, avg=1873.84, stdev=82.01, samples=19 00:36:07.424 iops : min= 416, max= 504, avg=468.42, stdev=20.61, samples=19 00:36:07.424 lat (msec) : 20=0.72%, 50=98.73%, 100=0.55% 00:36:07.424 cpu : usr=98.21%, sys=1.34%, ctx=12, majf=0, minf=9 00:36:07.424 IO depths : 1=5.5%, 2=11.3%, 4=23.2%, 8=52.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename1: (groupid=0, jobs=1): err= 0: pid=610119: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:36:07.424 slat (usec): min=7, max=105, avg=37.37, stdev=20.23 00:36:07.424 clat (usec): min=15089, max=63329, avg=33821.47, stdev=2696.83 00:36:07.424 lat (usec): min=15113, max=63353, avg=33858.84, stdev=2695.84 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[30540], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:07.424 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.424 | 99.00th=[44303], 99.50th=[49021], 99.90th=[63177], 99.95th=[63177], 00:36:07.424 | 99.99th=[63177] 00:36:07.424 bw ( KiB/s): min= 1539, max= 1920, per=4.15%, avg=1866.26, stdev=96.79, samples=19 00:36:07.424 iops : min= 384, max= 480, avg=466.53, stdev=24.34, samples=19 00:36:07.424 lat (msec) : 20=0.38%, 50=99.23%, 100=0.38% 00:36:07.424 cpu : usr=98.08%, sys=1.50%, ctx=14, majf=0, minf=9 00:36:07.424 IO depths : 1=5.0%, 2=11.2%, 4=24.9%, 8=51.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename2: (groupid=0, jobs=1): err= 0: pid=610120: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10007msec) 00:36:07.424 slat (usec): min=7, max=105, avg=37.24, stdev=21.09 
00:36:07.424 clat (usec): min=10708, max=64613, avg=33804.60, stdev=3018.81 00:36:07.424 lat (usec): min=10717, max=64644, avg=33841.84, stdev=3018.10 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[27132], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:07.424 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[36963], 00:36:07.424 | 99.00th=[46924], 99.50th=[50070], 99.90th=[64750], 99.95th=[64750], 00:36:07.424 | 99.99th=[64750] 00:36:07.424 bw ( KiB/s): min= 1536, max= 1952, per=4.15%, avg=1867.79, stdev=97.45, samples=19 00:36:07.424 iops : min= 384, max= 488, avg=466.95, stdev=24.36, samples=19 00:36:07.424 lat (msec) : 20=0.34%, 50=99.13%, 100=0.53% 00:36:07.424 cpu : usr=98.05%, sys=1.53%, ctx=14, majf=0, minf=9 00:36:07.424 IO depths : 1=4.6%, 2=9.6%, 4=20.2%, 8=56.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename2: (groupid=0, jobs=1): err= 0: pid=610121: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10010msec) 00:36:07.424 slat (nsec): min=11532, max=83471, avg=33653.84, stdev=12622.91 00:36:07.424 clat (usec): min=25261, max=70030, avg=33853.92, stdev=2040.03 00:36:07.424 lat (usec): min=25282, max=70061, avg=33887.57, stdev=2040.99 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:07.424 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.424 | 99.00th=[42206], 99.50th=[44303], 99.90th=[57934], 99.95th=[69731], 00:36:07.424 | 99.99th=[69731] 00:36:07.424 bw ( KiB/s): min= 1536, max= 1920, per=4.17%, avg=1874.40, stdev=95.05, samples=20 00:36:07.424 iops : min= 384, max= 480, avg=468.60, stdev=23.76, samples=20 00:36:07.424 lat (msec) : 50=99.66%, 100=0.34% 00:36:07.424 cpu : usr=95.65%, sys=2.54%, ctx=188, majf=0, minf=9 00:36:07.424 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename2: (groupid=0, jobs=1): err= 0: pid=610122: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.5MiB/10019msec) 00:36:07.424 slat (nsec): min=8372, max=84088, avg=32333.84, stdev=12259.38 00:36:07.424 clat (usec): min=15625, max=44530, avg=33650.19, stdev=1668.16 00:36:07.424 lat (usec): min=15681, max=44556, avg=33682.52, stdev=1666.26 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[25297], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:07.424 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:36:07.424 | 99.00th=[38011], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:36:07.424 | 99.99th=[44303] 00:36:07.424 bw ( KiB/s): min= 1792, max= 
1920, per=4.19%, avg=1884.00, stdev=57.31, samples=20 00:36:07.424 iops : min= 448, max= 480, avg=471.00, stdev=14.33, samples=20 00:36:07.424 lat (msec) : 20=0.15%, 50=99.85% 00:36:07.424 cpu : usr=92.48%, sys=3.79%, ctx=1056, majf=0, minf=9 00:36:07.424 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename2: (groupid=0, jobs=1): err= 0: pid=610123: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.3MiB/10043msec) 00:36:07.424 slat (usec): min=7, max=109, avg=25.02, stdev=18.99 00:36:07.424 clat (usec): min=17575, max=74051, avg=34041.35, stdev=3671.47 00:36:07.424 lat (usec): min=17609, max=74083, avg=34066.37, stdev=3672.43 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[24773], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:36:07.424 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[38011], 00:36:07.424 | 99.00th=[45876], 99.50th=[53740], 99.90th=[73925], 99.95th=[73925], 00:36:07.424 | 99.99th=[73925] 00:36:07.424 bw ( KiB/s): min= 1520, max= 1968, per=4.17%, avg=1873.20, stdev=92.14, samples=20 00:36:07.424 iops : min= 380, max= 492, avg=468.30, stdev=23.04, samples=20 00:36:07.424 lat (msec) : 20=0.47%, 50=98.98%, 100=0.55% 00:36:07.424 cpu : usr=98.17%, sys=1.40%, ctx=18, majf=0, minf=9 00:36:07.424 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=78.1%, 16=17.1%, 32=0.0%, >=64=0.0% 00:36:07.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 complete : 0=0.0%, 4=90.1%, 8=8.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.424 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.424 filename2: (groupid=0, jobs=1): err= 0: pid=610124: Tue Jul 23 03:35:32 2024 00:36:07.424 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10004msec) 00:36:07.424 slat (usec): min=8, max=115, avg=37.02, stdev=20.12 00:36:07.424 clat (usec): min=19249, max=74171, avg=34277.19, stdev=3474.40 00:36:07.424 lat (usec): min=19277, max=74202, avg=34314.21, stdev=3473.24 00:36:07.424 clat percentiles (usec): 00:36:07.424 | 1.00th=[29754], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:36:07.424 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:07.424 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35914], 95.00th=[38536], 00:36:07.424 | 99.00th=[46924], 99.50th=[54789], 99.90th=[73925], 99.95th=[73925], 00:36:07.424 | 99.99th=[73925] 00:36:07.424 bw ( KiB/s): min= 1520, max= 1920, per=4.12%, avg=1850.95, stdev=94.21, samples=19 00:36:07.424 iops : min= 380, max= 480, avg=462.74, stdev=23.55, samples=19 00:36:07.424 lat (msec) : 20=0.19%, 50=99.25%, 100=0.56% 00:36:07.425 cpu : usr=91.06%, sys=4.10%, ctx=193, majf=0, minf=9 00:36:07.425 IO depths : 1=0.2%, 2=4.3%, 4=17.5%, 8=64.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:36:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 complete : 0=0.0%, 4=92.9%, 8=3.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 issued rwts: total=4636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.425 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:36:07.425 filename2: (groupid=0, jobs=1): err= 0: pid=610125: Tue Jul 23 03:35:32 2024 00:36:07.425 read: IOPS=469, BW=1879KiB/s (1924kB/s)(18.4MiB/10014msec) 00:36:07.425 slat (nsec): min=8794, max=56899, avg=26793.28, stdev=8056.93 00:36:07.425 clat (usec): min=14985, max=71606, avg=33822.49, stdev=2216.63 00:36:07.425 lat (usec): min=15016, max=71643, avg=33849.29, stdev=2216.28 00:36:07.425 clat percentiles (usec): 00:36:07.425 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:36:07.425 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:36:07.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.425 | 99.00th=[39584], 99.50th=[43779], 99.90th=[58459], 99.95th=[58459], 00:36:07.425 | 99.99th=[71828] 00:36:07.425 bw ( KiB/s): min= 1539, max= 1920, per=4.17%, avg=1873.00, stdev=96.82, samples=19 00:36:07.425 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:36:07.425 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:36:07.425 cpu : usr=98.15%, sys=1.44%, ctx=14, majf=0, minf=9 00:36:07.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.425 filename2: (groupid=0, jobs=1): err= 0: pid=610126: Tue Jul 23 03:35:32 2024 00:36:07.425 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.5MiB/10037msec) 00:36:07.425 slat (nsec): min=7948, max=74869, avg=15066.53, stdev=10420.97 00:36:07.425 clat (usec): min=13451, max=48330, avg=33739.34, stdev=2026.40 00:36:07.425 lat (usec): min=13460, max=48344, avg=33754.40, stdev=2027.20 00:36:07.425 clat percentiles (usec): 00:36:07.425 | 1.00th=[26870], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:36:07.425 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:07.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:36:07.425 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:36:07.425 | 99.99th=[48497] 00:36:07.425 bw ( KiB/s): min= 1792, max= 1923, per=4.20%, avg=1887.95, stdev=56.85, samples=20 00:36:07.425 iops : min= 448, max= 480, avg=471.95, stdev=14.19, samples=20 00:36:07.425 lat (msec) : 20=0.72%, 50=99.28% 00:36:07.425 cpu : usr=98.12%, sys=1.46%, ctx=19, majf=0, minf=9 00:36:07.425 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.425 filename2: (groupid=0, jobs=1): err= 0: pid=610127: Tue Jul 23 03:35:32 2024 00:36:07.425 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10005msec) 00:36:07.425 slat (nsec): min=8059, max=81722, avg=25360.58, stdev=12626.11 00:36:07.425 clat (usec): min=25830, max=69929, avg=33940.39, stdev=1919.97 00:36:07.425 lat (usec): min=25866, max=69952, avg=33965.75, stdev=1919.10 00:36:07.425 clat percentiles (usec): 00:36:07.425 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:36:07.425 | 30.00th=[33424], 
40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:07.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:07.425 | 99.00th=[42730], 99.50th=[44303], 99.90th=[57934], 99.95th=[57934], 00:36:07.425 | 99.99th=[69731] 00:36:07.425 bw ( KiB/s): min= 1536, max= 1920, per=4.17%, avg=1872.84, stdev=97.39, samples=19 00:36:07.425 iops : min= 384, max= 480, avg=468.21, stdev=24.35, samples=19 00:36:07.425 lat (msec) : 50=99.66%, 100=0.34% 00:36:07.425 cpu : usr=98.17%, sys=1.43%, ctx=26, majf=0, minf=9 00:36:07.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.425 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.425 00:36:07.425 Run status group 0 (all jobs): 00:36:07.425 READ: bw=43.9MiB/s (46.0MB/s), 1854KiB/s-1889KiB/s (1898kB/s-1934kB/s), io=441MiB (462MB), run=10001-10043msec 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 bdev_null0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.425 03:35:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.425 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.425 [2024-07-23 03:35:32.912979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.426 bdev_null1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- 
# local subsystem config 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:07.426 { 00:36:07.426 "params": { 00:36:07.426 "name": "Nvme$subsystem", 00:36:07.426 "trtype": "$TEST_TRANSPORT", 00:36:07.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.426 "adrfam": "ipv4", 00:36:07.426 "trsvcid": "$NVMF_PORT", 00:36:07.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.426 "hdgst": ${hdgst:-false}, 00:36:07.426 "ddgst": ${ddgst:-false} 00:36:07.426 }, 00:36:07.426 "method": "bdev_nvme_attach_controller" 00:36:07.426 } 00:36:07.426 EOF 00:36:07.426 )") 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:07.426 { 00:36:07.426 "params": { 00:36:07.426 "name": "Nvme$subsystem", 00:36:07.426 "trtype": "$TEST_TRANSPORT", 00:36:07.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.426 "adrfam": "ipv4", 00:36:07.426 "trsvcid": "$NVMF_PORT", 00:36:07.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.426 "hdgst": ${hdgst:-false}, 00:36:07.426 "ddgst": ${ddgst:-false} 00:36:07.426 }, 00:36:07.426 "method": "bdev_nvme_attach_controller" 00:36:07.426 } 00:36:07.426 EOF 00:36:07.426 )") 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:07.426 "params": { 00:36:07.426 "name": "Nvme0", 00:36:07.426 "trtype": "tcp", 00:36:07.426 "traddr": "10.0.0.2", 00:36:07.426 "adrfam": "ipv4", 00:36:07.426 "trsvcid": "4420", 00:36:07.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.426 "hdgst": false, 00:36:07.426 "ddgst": false 00:36:07.426 }, 00:36:07.426 "method": "bdev_nvme_attach_controller" 00:36:07.426 },{ 00:36:07.426 "params": { 00:36:07.426 "name": "Nvme1", 00:36:07.426 "trtype": "tcp", 00:36:07.426 "traddr": "10.0.0.2", 00:36:07.426 "adrfam": "ipv4", 00:36:07.426 "trsvcid": "4420", 00:36:07.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:07.426 "hdgst": false, 00:36:07.426 "ddgst": false 00:36:07.426 }, 00:36:07.426 "method": "bdev_nvme_attach_controller" 00:36:07.426 }' 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:07.426 03:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.426 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.426 ... 00:36:07.426 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.426 ... 
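The trace above creates two DIF-type-1 null bdevs, exposes each through its own NVMe/TCP subsystem on 10.0.0.2:4420, and hands fio a generated bdev_nvme_attach_controller config through the spdk_bdev ioengine. A minimal stand-alone sketch of that sequence follows; the SPDK checkout path, the nvmf_create_transport step, and the job-file details are assumptions rather than values taken verbatim from this log.

# Hedged setup sketch mirroring create_subsystems 0 1 in target/dif.sh; paths are assumptions.
SPDK=/path/to/spdk                                     # assumption: the checkout driving this run
"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp    # assumption: created earlier by the harness
for i in 0 1; do
  "$SPDK"/scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  "$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      --serial-number 53313233-$i --allow-any-host
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done
# fio then attaches as an initiator through the bdev plugin; bdev.json would carry the
# bdev_nvme_attach_controller entries printed above (Nvme0/Nvme1, hdgst/ddgst false),
# and dif.job is a hypothetical job file with the bs=8k,16k,128k / numjobs=2 / iodepth=8
# / runtime=5 parameters set by dif.sh@115 above.
LD_PRELOAD="$SPDK"/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 dif.job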
00:36:07.426 fio-3.35 00:36:07.426 Starting 4 threads 00:36:07.426 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.698 00:36:12.698 filename0: (groupid=0, jobs=1): err= 0: pid=611387: Tue Jul 23 03:35:38 2024 00:36:12.698 read: IOPS=1759, BW=13.7MiB/s (14.4MB/s)(68.8MiB/5001msec) 00:36:12.698 slat (nsec): min=4092, max=43751, avg=10490.18, stdev=3977.01 00:36:12.698 clat (usec): min=1247, max=7633, avg=4514.45, stdev=776.38 00:36:12.698 lat (usec): min=1255, max=7641, avg=4524.94, stdev=775.91 00:36:12.698 clat percentiles (usec): 00:36:12.698 | 1.00th=[ 3163], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 4047], 00:36:12.698 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:12.698 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5866], 95.00th=[ 6390], 00:36:12.698 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7635], 00:36:12.698 | 99.99th=[ 7635] 00:36:12.698 bw ( KiB/s): min=12921, max=15056, per=24.49%, avg=14074.50, stdev=677.67, samples=10 00:36:12.698 iops : min= 1615, max= 1882, avg=1759.30, stdev=84.73, samples=10 00:36:12.698 lat (msec) : 2=0.03%, 4=17.66%, 10=82.31% 00:36:12.698 cpu : usr=92.70%, sys=6.80%, ctx=6, majf=0, minf=9 00:36:12.698 IO depths : 1=0.1%, 2=0.8%, 4=71.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 issued rwts: total=8800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.698 filename0: (groupid=0, jobs=1): err= 0: pid=611388: Tue Jul 23 03:35:38 2024 00:36:12.698 read: IOPS=1817, BW=14.2MiB/s (14.9MB/s)(71.0MiB/5003msec) 00:36:12.698 slat (nsec): min=3786, max=42380, avg=11088.27, stdev=4169.42 00:36:12.698 clat (usec): min=2253, max=7520, avg=4368.77, stdev=758.81 00:36:12.698 lat (usec): min=2262, max=7533, avg=4379.86, stdev=758.40 00:36:12.698 clat percentiles (usec): 00:36:12.698 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3884], 00:36:12.698 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:12.698 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5538], 95.00th=[ 6194], 00:36:12.698 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7373], 00:36:12.698 | 99.99th=[ 7504] 00:36:12.698 bw ( KiB/s): min=13600, max=14928, per=25.30%, avg=14542.00, stdev=402.32, samples=10 00:36:12.698 iops : min= 1700, max= 1866, avg=1817.70, stdev=50.34, samples=10 00:36:12.698 lat (msec) : 4=28.16%, 10=71.84% 00:36:12.698 cpu : usr=93.42%, sys=6.08%, ctx=20, majf=0, minf=0 00:36:12.698 IO depths : 1=0.1%, 2=0.8%, 4=70.3%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 issued rwts: total=9092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.698 filename1: (groupid=0, jobs=1): err= 0: pid=611389: Tue Jul 23 03:35:38 2024 00:36:12.698 read: IOPS=1862, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5001msec) 00:36:12.698 slat (nsec): min=3798, max=47766, avg=11461.56, stdev=4335.55 00:36:12.698 clat (usec): min=1998, max=7010, avg=4260.43, stdev=502.09 00:36:12.698 lat (usec): min=2011, max=7019, avg=4271.89, stdev=502.07 00:36:12.698 clat percentiles (usec): 00:36:12.698 | 1.00th=[ 3064], 5.00th=[ 3458], 10.00th=[ 
3720], 20.00th=[ 3949], 00:36:12.698 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:12.698 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5211], 00:36:12.698 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6718], 99.95th=[ 6915], 00:36:12.698 | 99.99th=[ 6980] 00:36:12.698 bw ( KiB/s): min=14192, max=15552, per=25.85%, avg=14860.44, stdev=427.44, samples=9 00:36:12.698 iops : min= 1774, max= 1944, avg=1857.56, stdev=53.43, samples=9 00:36:12.698 lat (msec) : 2=0.01%, 4=23.89%, 10=76.09% 00:36:12.698 cpu : usr=91.50%, sys=8.00%, ctx=16, majf=0, minf=9 00:36:12.698 IO depths : 1=0.1%, 2=1.2%, 4=70.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 issued rwts: total=9316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.698 filename1: (groupid=0, jobs=1): err= 0: pid=611390: Tue Jul 23 03:35:38 2024 00:36:12.698 read: IOPS=1747, BW=13.7MiB/s (14.3MB/s)(68.3MiB/5004msec) 00:36:12.698 slat (nsec): min=3709, max=42831, avg=10875.81, stdev=4051.76 00:36:12.698 clat (usec): min=2783, max=7638, avg=4544.93, stdev=785.24 00:36:12.698 lat (usec): min=2794, max=7646, avg=4555.80, stdev=785.03 00:36:12.698 clat percentiles (usec): 00:36:12.698 | 1.00th=[ 3458], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 4080], 00:36:12.698 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:36:12.698 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5997], 95.00th=[ 6521], 00:36:12.698 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7373], 99.95th=[ 7504], 00:36:12.698 | 99.99th=[ 7635] 00:36:12.698 bw ( KiB/s): min=13376, max=14720, per=24.32%, avg=13977.60, stdev=421.45, samples=10 00:36:12.698 iops : min= 1672, max= 1840, avg=1747.20, stdev=52.68, samples=10 00:36:12.698 lat (msec) : 4=15.26%, 10=84.74% 00:36:12.698 cpu : usr=92.80%, sys=6.74%, ctx=7, majf=0, minf=0 00:36:12.698 IO depths : 1=0.1%, 2=0.3%, 4=72.6%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.698 issued rwts: total=8744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.698 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.698 00:36:12.698 Run status group 0 (all jobs): 00:36:12.698 READ: bw=56.1MiB/s (58.9MB/s), 13.7MiB/s-14.6MiB/s (14.3MB/s-15.3MB/s), io=281MiB (295MB), run=5001-5004msec 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
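The destroy_subsystems 0 1 trace around this point reverses the earlier setup one subsystem at a time, first dropping the NVMe-oF subsystem and then the null bdev backing it. An equivalent stand-alone cleanup, with the same caveat about the rpc.py path being an assumption, is:

# Hedged cleanup sketch matching destroy_subsystem() as traced above.
SPDK=/path/to/spdk                                     # assumption
for i in 0 1; do
  "$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  "$SPDK"/scripts/rpc.py bdev_null_delete bdev_null$i
done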
00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.698 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 00:36:12.699 real 0m24.368s 00:36:12.699 user 4m29.105s 00:36:12.699 sys 0m8.413s 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 ************************************ 00:36:12.699 END TEST fio_dif_rand_params 00:36:12.699 ************************************ 00:36:12.699 03:35:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:12.699 03:35:39 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:12.699 03:35:39 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 ************************************ 00:36:12.699 START TEST fio_dif_digest 00:36:12.699 ************************************ 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:12.699 03:35:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 bdev_null0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.699 [2024-07-23 03:35:39.260649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:12.699 { 00:36:12.699 "params": { 00:36:12.699 "name": "Nvme$subsystem", 00:36:12.699 "trtype": "$TEST_TRANSPORT", 00:36:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.699 "adrfam": "ipv4", 00:36:12.699 "trsvcid": "$NVMF_PORT", 00:36:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.699 "hdgst": ${hdgst:-false}, 00:36:12.699 "ddgst": ${ddgst:-false} 00:36:12.699 }, 00:36:12.699 "method": "bdev_nvme_attach_controller" 00:36:12.699 } 00:36:12.699 EOF 00:36:12.699 )") 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:12.699 03:35:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:12.699 "params": { 00:36:12.699 "name": "Nvme0", 00:36:12.699 "trtype": "tcp", 00:36:12.699 "traddr": "10.0.0.2", 00:36:12.699 "adrfam": "ipv4", 00:36:12.699 "trsvcid": "4420", 00:36:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.699 "hdgst": true, 00:36:12.699 "ddgst": true 00:36:12.699 }, 00:36:12.699 "method": "bdev_nvme_attach_controller" 00:36:12.699 }' 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:12.958 03:35:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.958 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:12.958 ... 
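For reference, the two anonymous file descriptors handed to fio above are generated on the fly by the test script: /dev/fd/62 carries an SPDK JSON config built from the parameters printed in the trace, and /dev/fd/61 carries the fio job file. A standalone sketch of equivalent files follows; the values are the ones printed in the log, while the on-disk paths, the surrounding "subsystems"/"bdev" wrapper and the initiator-side bdev name Nvme0n1 are assumptions made for illustration.

#!/usr/bin/env bash
# Sketch only: rebuild the generated inputs that the trace pipes to the
# fio bdev plugin via /dev/fd/62 (SPDK JSON config) and /dev/fd/61 (job file).
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/digest.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
thread=1
filename=Nvme0n1
EOF
# thread=1 matches fio reporting "Starting 3 threads" below; the
# filename is the bdev name assumed to be exposed by the attached controller.

# Launch fio with the SPDK bdev engine preloaded, as the trace does:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/digest.fio

With hdgst and ddgst set to true, the attached controller negotiates NVMe/TCP header and data digests, which is what this fio_dif_digest pass is exercising.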
00:36:12.958 fio-3.35 00:36:12.958 Starting 3 threads 00:36:13.216 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.413 00:36:25.413 filename0: (groupid=0, jobs=1): err= 0: pid=612256: Tue Jul 23 03:35:50 2024 00:36:25.413 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10051msec) 00:36:25.413 slat (nsec): min=7230, max=38903, avg=14235.62, stdev=3674.79 00:36:25.413 clat (usec): min=9103, max=52951, avg=14985.01, stdev=1759.09 00:36:25.413 lat (usec): min=9116, max=52964, avg=14999.25, stdev=1759.20 00:36:25.413 clat percentiles (usec): 00:36:25.413 | 1.00th=[10683], 5.00th=[12780], 10.00th=[13304], 20.00th=[13960], 00:36:25.413 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:36:25.413 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:36:25.413 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19006], 99.95th=[50594], 00:36:25.413 | 99.99th=[52691] 00:36:25.413 bw ( KiB/s): min=24832, max=26880, per=34.60%, avg=25651.20, stdev=619.32, samples=20 00:36:25.413 iops : min= 194, max= 210, avg=200.40, stdev= 4.84, samples=20 00:36:25.413 lat (msec) : 10=0.45%, 20=99.45%, 100=0.10% 00:36:25.413 cpu : usr=90.79%, sys=8.74%, ctx=27, majf=0, minf=183 00:36:25.413 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.413 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.413 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.413 filename0: (groupid=0, jobs=1): err= 0: pid=612257: Tue Jul 23 03:35:50 2024 00:36:25.413 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(253MiB/10011msec) 00:36:25.413 slat (nsec): min=7540, max=82464, avg=14546.27, stdev=5145.05 00:36:25.413 clat (usec): min=9509, max=57188, avg=14812.33, stdev=2008.34 00:36:25.413 lat (usec): min=9522, max=57200, avg=14826.87, stdev=2008.50 00:36:25.413 clat percentiles (usec): 00:36:25.413 | 1.00th=[10945], 5.00th=[12780], 10.00th=[13304], 20.00th=[13960], 00:36:25.413 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:36:25.413 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:36:25.413 | 99.00th=[17171], 99.50th=[17695], 99.90th=[56361], 99.95th=[57410], 00:36:25.413 | 99.99th=[57410] 00:36:25.413 bw ( KiB/s): min=23808, max=27392, per=34.91%, avg=25881.60, stdev=791.88, samples=20 00:36:25.414 iops : min= 186, max= 214, avg=202.20, stdev= 6.19, samples=20 00:36:25.414 lat (msec) : 10=0.25%, 20=99.60%, 100=0.15% 00:36:25.414 cpu : usr=89.95%, sys=9.23%, ctx=51, majf=0, minf=185 00:36:25.414 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.414 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.414 filename0: (groupid=0, jobs=1): err= 0: pid=612258: Tue Jul 23 03:35:50 2024 00:36:25.414 read: IOPS=178, BW=22.4MiB/s (23.4MB/s)(224MiB/10009msec) 00:36:25.414 slat (nsec): min=7358, max=44362, avg=13657.10, stdev=3618.22 00:36:25.414 clat (usec): min=9855, max=61680, avg=16756.65, stdev=3669.85 00:36:25.414 lat (usec): min=9866, max=61693, avg=16770.31, stdev=3669.84 00:36:25.414 clat percentiles (usec): 00:36:25.414 | 1.00th=[13042], 
5.00th=[14353], 10.00th=[14877], 20.00th=[15401], 00:36:25.414 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16450], 60.00th=[16909], 00:36:25.414 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:36:25.414 | 99.00th=[20317], 99.50th=[56886], 99.90th=[60031], 99.95th=[61604], 00:36:25.414 | 99.99th=[61604] 00:36:25.414 bw ( KiB/s): min=20736, max=24368, per=30.85%, avg=22876.00, stdev=1160.88, samples=20 00:36:25.414 iops : min= 162, max= 190, avg=178.70, stdev= 9.04, samples=20 00:36:25.414 lat (msec) : 10=0.06%, 20=98.49%, 50=0.78%, 100=0.67% 00:36:25.414 cpu : usr=90.88%, sys=8.46%, ctx=19, majf=0, minf=116 00:36:25.414 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.414 issued rwts: total=1790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.414 00:36:25.414 Run status group 0 (all jobs): 00:36:25.414 READ: bw=72.4MiB/s (75.9MB/s), 22.4MiB/s-25.3MiB/s (23.4MB/s-26.5MB/s), io=728MiB (763MB), run=10009-10051msec 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.414 00:36:25.414 real 0m11.212s 00:36:25.414 user 0m28.351s 00:36:25.414 sys 0m2.925s 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:25.414 03:35:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 ************************************ 00:36:25.414 END TEST fio_dif_digest 00:36:25.414 ************************************ 00:36:25.414 03:35:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:25.414 03:35:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:25.414 rmmod nvme_tcp 00:36:25.414 rmmod nvme_fabrics 00:36:25.414 rmmod 
nvme_keyring 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 606190 ']' 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 606190 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 606190 ']' 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 606190 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 606190 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 606190' 00:36:25.414 killing process with pid 606190 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@965 -- # kill 606190 00:36:25.414 03:35:50 nvmf_dif -- common/autotest_common.sh@970 -- # wait 606190 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:25.414 03:35:50 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:25.414 Waiting for block devices as requested 00:36:25.414 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:25.414 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:25.673 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:25.673 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:25.673 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:25.932 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:25.932 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:25.932 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:25.932 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:25.932 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:26.190 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:26.190 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:26.190 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:26.448 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:26.448 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:26.448 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:26.448 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:26.706 03:35:53 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:26.706 03:35:53 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:26.706 03:35:53 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:26.706 03:35:53 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:26.706 03:35:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.706 03:35:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:26.706 03:35:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.605 03:35:55 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:28.605 00:36:28.605 real 1m6.818s 00:36:28.605 user 6m24.868s 00:36:28.605 sys 0m20.676s 00:36:28.605 03:35:55 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:28.605 03:35:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.605 
************************************ 00:36:28.605 END TEST nvmf_dif 00:36:28.605 ************************************ 00:36:28.862 03:35:55 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:28.862 03:35:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:28.862 03:35:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:28.862 03:35:55 -- common/autotest_common.sh@10 -- # set +x 00:36:28.862 ************************************ 00:36:28.862 START TEST nvmf_abort_qd_sizes 00:36:28.862 ************************************ 00:36:28.862 03:35:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:28.862 * Looking for test storage... 00:36:28.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.863 03:35:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:28.863 03:35:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:30.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:30.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:30.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:30.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
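The two ice ports found above are then split into the loopback topology used for the rest of the run: cvl_0_0 is moved into a dedicated network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the host namespace as the initiator (10.0.0.1). Condensed from the nvmf_tcp_init trace that follows, the setup amounts to:

# Condensed sketch of the interface setup traced below; interface names,
# addresses and commands are the ones the common.sh helpers run.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator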
00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:30.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:36:30.782 00:36:30.782 --- 10.0.0.2 ping statistics --- 00:36:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.782 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:30.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:36:30.782 00:36:30.782 --- 10.0.0.1 ping statistics --- 00:36:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.782 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:30.782 03:35:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:32.176 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:32.176 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:32.177 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:33.111 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:33.111 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=617039 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 617039 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 617039 ']' 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:33.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:33.112 03:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.370 [2024-07-23 03:35:59.722903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:33.370 [2024-07-23 03:35:59.723008] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.370 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.370 [2024-07-23 03:35:59.793748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:33.370 [2024-07-23 03:35:59.888226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.370 [2024-07-23 03:35:59.888292] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.370 [2024-07-23 03:35:59.888320] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.370 [2024-07-23 03:35:59.888333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.370 [2024-07-23 03:35:59.888346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:33.370 [2024-07-23 03:35:59.888441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.370 [2024-07-23 03:35:59.888496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:33.370 [2024-07-23 03:35:59.888641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:33.370 [2024-07-23 03:35:59.888644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:33.629 03:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.629 ************************************ 00:36:33.629 START TEST spdk_target_abort 00:36:33.629 ************************************ 00:36:33.629 03:36:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:33.629 03:36:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:33.629 03:36:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:33.630 03:36:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.630 03:36:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 spdk_targetn1 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 [2024-07-23 03:36:02.902548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 [2024-07-23 03:36:02.934819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:36.913 03:36:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:36.913 EAL: No free 2048 kB hugepages reported on node 1 
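The rabort helper above assembles the -r transport string field by field and then runs SPDK's abort example once per queue depth (4, 24 and 64). Reproduced by hand, the first iteration is the single command below, with all flags taken directly from the trace:

# Abort example run for queue depth 4; repeat with -q 24 and -q 64 for the
# remaining iterations of the loop.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'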
00:36:40.193 Initializing NVMe Controllers 00:36:40.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.193 Initialization complete. Launching workers. 00:36:40.193 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10417, failed: 0 00:36:40.193 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 9217 00:36:40.193 success 810, unsuccess 390, failed 0 00:36:40.193 03:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.193 03:36:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.193 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.473 Initializing NVMe Controllers 00:36:43.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.473 Initialization complete. Launching workers. 00:36:43.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8643, failed: 0 00:36:43.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7404 00:36:43.473 success 337, unsuccess 902, failed 0 00:36:43.473 03:36:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:43.473 03:36:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:43.473 EAL: No free 2048 kB hugepages reported on node 1 00:36:46.755 Initializing NVMe Controllers 00:36:46.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:46.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:46.756 Initialization complete. Launching workers. 
00:36:46.756 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31557, failed: 0 00:36:46.756 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2831, failed to submit 28726 00:36:46.756 success 510, unsuccess 2321, failed 0 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.756 03:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 617039 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 617039 ']' 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 617039 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 617039 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 617039' 00:36:47.688 killing process with pid 617039 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 617039 00:36:47.688 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 617039 00:36:47.947 00:36:47.947 real 0m14.234s 00:36:47.947 user 0m53.768s 00:36:47.947 sys 0m2.657s 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.947 ************************************ 00:36:47.947 END TEST spdk_target_abort 00:36:47.947 ************************************ 00:36:47.947 03:36:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:47.947 03:36:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:47.947 03:36:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:47.947 03:36:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.947 ************************************ 00:36:47.947 START TEST kernel_target_abort 00:36:47.947 
************************************ 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:47.947 03:36:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:48.882 Waiting for block devices as requested 00:36:48.882 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:49.141 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:49.141 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:49.400 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:49.400 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:49.400 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:49.400 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:49.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:49.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:49.659 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:49.659 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:49.917 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:49.917 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:49.917 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:49.917 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:50.175 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:50.175 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:50.175 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:50.434 No valid GPT data, bailing 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:50.434 03:36:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:50.434 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:50.434 00:36:50.434 Discovery Log Number of Records 2, Generation counter 2 00:36:50.435 =====Discovery Log Entry 0====== 00:36:50.435 trtype: tcp 00:36:50.435 adrfam: ipv4 00:36:50.435 subtype: current discovery subsystem 00:36:50.435 treq: not specified, sq flow control disable supported 00:36:50.435 portid: 1 00:36:50.435 trsvcid: 4420 00:36:50.435 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:50.435 traddr: 10.0.0.1 00:36:50.435 eflags: none 00:36:50.435 sectype: none 00:36:50.435 =====Discovery Log Entry 1====== 00:36:50.435 trtype: tcp 00:36:50.435 adrfam: ipv4 00:36:50.435 subtype: nvme subsystem 00:36:50.435 treq: not specified, sq flow control disable supported 00:36:50.435 portid: 1 00:36:50.435 trsvcid: 4420 00:36:50.435 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:50.435 traddr: 10.0.0.1 00:36:50.435 eflags: none 00:36:50.435 sectype: none 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.435 03:36:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.435 03:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.435 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.784 Initializing NVMe Controllers 00:36:53.784 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:53.784 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:53.784 Initialization complete. Launching workers. 00:36:53.784 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29298, failed: 0 00:36:53.784 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29298, failed to submit 0 00:36:53.784 success 0, unsuccess 29298, failed 0 00:36:53.784 03:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.784 03:36:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.784 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.068 Initializing NVMe Controllers 00:36:57.068 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.068 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.068 Initialization complete. Launching workers. 
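The configure_kernel_target call traced above is the standard Linux nvmet configfs sequence. A minimal stand-alone sketch with the same NQN, backing namespace (/dev/nvme0n1) and TCP listener (10.0.0.1:4420); the attribute file names below are the usual nvmet ones and are an assumption here, since the xtrace only records the echo side of each redirection:

modprobe nvmet                                       # the trace loads only nvmet; nvmet_tcp is
modprobe nvmet_tcp                                   # normally auto-loaded when the TCP port is linked
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"               # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"         # exposes the subsystem on the port

clean_kernel_target later in this trace undoes the same thing in reverse: remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.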
00:36:57.068 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59325, failed: 0 00:36:57.068 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14942, failed to submit 44383 00:36:57.068 success 0, unsuccess 14942, failed 0 00:36:57.068 03:36:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.068 03:36:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.350 Initializing NVMe Controllers 00:37:00.350 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.350 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.350 Initialization complete. Launching workers. 00:37:00.350 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57893, failed: 0 00:37:00.350 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14446, failed to submit 43447 00:37:00.350 success 0, unsuccess 14446, failed 0 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:00.350 03:36:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:00.916 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:00.916 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:00.916 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:00.916 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:00.916 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:00.916 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:00.916 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:01.174 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:01.174 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:01.174 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:02.112 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:02.112 00:37:02.112 real 0m14.196s 00:37:02.112 user 0m4.726s 00:37:02.112 sys 0m3.401s 00:37:02.112 03:36:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:02.112 03:36:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.112 ************************************ 00:37:02.112 END TEST kernel_target_abort 00:37:02.112 ************************************ 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:02.112 rmmod nvme_tcp 00:37:02.112 rmmod nvme_fabrics 00:37:02.112 rmmod nvme_keyring 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 617039 ']' 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 617039 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 617039 ']' 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 617039 00:37:02.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (617039) - No such process 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 617039 is not found' 00:37:02.112 Process with pid 617039 is not found 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:02.112 03:36:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.486 Waiting for block devices as requested 00:37:03.486 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:03.486 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:03.486 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:03.744 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:03.744 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:03.744 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:03.744 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:03.744 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:04.004 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:04.004 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:04.004 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:04.263 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:04.263 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:04.263 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:04.263 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:04.521 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:37:04.521 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:04.521 03:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.052 03:36:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:07.052 00:37:07.052 real 0m37.890s 00:37:07.052 user 1m0.619s 00:37:07.052 sys 0m9.432s 00:37:07.052 03:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:07.052 03:36:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:07.052 ************************************ 00:37:07.052 END TEST nvmf_abort_qd_sizes 00:37:07.052 ************************************ 00:37:07.052 03:36:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:07.052 03:36:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:07.052 03:36:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:07.052 03:36:33 -- common/autotest_common.sh@10 -- # set +x 00:37:07.052 ************************************ 00:37:07.052 START TEST keyring_file 00:37:07.052 ************************************ 00:37:07.052 03:36:33 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:07.052 * Looking for test storage... 
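For reference, the three kernel_target_abort passes above each run the same SPDK abort example against that kernel target, varying only the queue depth. A stand-alone equivalent of the traced loop, with the transport string copied verbatim from the trace (the binary path is specific to this CI workspace):

abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do                                           # qds=(4 24 64) in abort_qd_sizes.sh
    "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"      # 50/50 read-write mix, 4 KiB I/Os
done

At -q 4 every abort request was accepted (29298 submitted, 0 failed to submit); at -q 24 and -q 64 most could not be submitted (44383 and 43447 failed to submit), consistent with the queue-depth-dependent abort behaviour this test is exercising.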
00:37:07.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:07.052 03:36:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:07.052 03:36:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.052 03:36:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:07.052 03:36:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.053 03:36:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.053 03:36:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.053 03:36:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.053 03:36:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.053 03:36:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.053 03:36:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.053 03:36:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:07.053 03:36:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EcSnSoX1n0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:07.053 03:36:33 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EcSnSoX1n0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EcSnSoX1n0 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EcSnSoX1n0 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6PLnuLCRgh 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:07.053 03:36:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6PLnuLCRgh 00:37:07.053 03:36:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6PLnuLCRgh 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6PLnuLCRgh 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=623406 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:07.053 03:36:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 623406 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 623406 ']' 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:07.053 03:36:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.053 [2024-07-23 03:36:33.377093] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
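The prep_key helper traced above just materialises each PSK as a file: mktemp, write the key in the NVMe TLS interchange format, then chmod 0600 so the keyring code will accept it (a later chmod 0660 step in this test deliberately breaks that check). A rough stand-alone sketch; the payload encoding used here, base64 of the raw key bytes plus a little-endian CRC-32 trailer, is an assumption inferred from the NVMeTLSkey-1 prefix visible in the trace, not a copy of the script's python one-liner, which the xtrace does not show:

key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                        # e.g. /tmp/tmp.EcSnSoX1n0 in this run
python3 - "$key_hex" > "$path" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")                        # assumed CRC-32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())   # "00" assumed to mean digest 0 (no hash)
PY
chmod 0600 "$path"                    # keyring_file_add_key rejects files readable by group/other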
00:37:07.053 [2024-07-23 03:36:33.377188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623406 ] 00:37:07.053 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.053 [2024-07-23 03:36:33.436639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.053 [2024-07-23 03:36:33.526250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:07.311 03:36:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.311 [2024-07-23 03:36:33.782134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.311 null0 00:37:07.311 [2024-07-23 03:36:33.814194] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:07.311 [2024-07-23 03:36:33.814641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:07.311 [2024-07-23 03:36:33.822207] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.311 03:36:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.311 [2024-07-23 03:36:33.834228] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:07.311 request: 00:37:07.311 { 00:37:07.311 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.311 "secure_channel": false, 00:37:07.311 "listen_address": { 00:37:07.311 "trtype": "tcp", 00:37:07.311 "traddr": "127.0.0.1", 00:37:07.311 "trsvcid": "4420" 00:37:07.311 }, 00:37:07.311 "method": "nvmf_subsystem_add_listener", 00:37:07.311 "req_id": 1 00:37:07.311 } 00:37:07.311 Got JSON-RPC error response 00:37:07.311 response: 00:37:07.311 { 00:37:07.311 "code": -32602, 00:37:07.311 "message": "Invalid parameters" 00:37:07.311 } 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:07.311 03:36:33 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:07.311 03:36:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=623419 00:37:07.311 03:36:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 623419 /var/tmp/bperf.sock 00:37:07.311 03:36:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 623419 ']' 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:07.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:07.311 03:36:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.311 [2024-07-23 03:36:33.882624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:07.311 [2024-07-23 03:36:33.882705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623419 ] 00:37:07.569 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.569 [2024-07-23 03:36:33.940569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.569 [2024-07-23 03:36:34.026226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.569 03:36:34 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:07.569 03:36:34 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:07.569 03:36:34 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:07.569 03:36:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:07.827 03:36:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6PLnuLCRgh 00:37:07.827 03:36:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6PLnuLCRgh 00:37:08.085 03:36:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:08.085 03:36:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:08.085 03:36:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.085 03:36:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.085 03:36:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.343 03:36:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.EcSnSoX1n0 == \/\t\m\p\/\t\m\p\.\E\c\S\n\S\o\X\1\n\0 ]] 00:37:08.343 03:36:34 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:08.343 03:36:34 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:08.343 03:36:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.343 03:36:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.344 03:36:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.602 03:36:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6PLnuLCRgh == \/\t\m\p\/\t\m\p\.\6\P\L\n\u\L\C\R\g\h ]] 00:37:08.602 03:36:35 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:08.602 03:36:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:08.602 03:36:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.602 03:36:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.602 03:36:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.602 03:36:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.861 03:36:35 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:08.861 03:36:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:08.861 03:36:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.861 03:36:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.861 03:36:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.861 03:36:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.861 03:36:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:09.119 03:36:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:09.119 03:36:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.119 03:36:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.377 [2024-07-23 03:36:35.853341] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:09.377 nvme0n1 00:37:09.377 03:36:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:09.377 03:36:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.377 03:36:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.377 03:36:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.377 03:36:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.377 03:36:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:09.635 03:36:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:09.635 03:36:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:09.635 03:36:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:09.635 03:36:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.635 03:36:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.635 
03:36:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.635 03:36:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:09.894 03:36:36 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:09.894 03:36:36 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:10.152 Running I/O for 1 seconds... 00:37:11.086 00:37:11.086 Latency(us) 00:37:11.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.086 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:11.086 nvme0n1 : 1.03 4342.91 16.96 0.00 0.00 29049.59 9709.04 38447.79 00:37:11.086 =================================================================================================================== 00:37:11.086 Total : 4342.91 16.96 0.00 0.00 29049.59 9709.04 38447.79 00:37:11.086 0 00:37:11.086 03:36:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:11.086 03:36:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:11.348 03:36:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:11.348 03:36:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:11.348 03:36:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.348 03:36:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.348 03:36:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.348 03:36:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.649 03:36:38 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:11.649 03:36:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:11.649 03:36:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:11.649 03:36:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.650 03:36:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.650 03:36:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.650 03:36:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:11.908 03:36:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:11.908 03:36:38 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:11.908 03:36:38 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.908 03:36:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.908 03:36:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:12.167 [2024-07-23 03:36:38.559074] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:12.167 [2024-07-23 03:36:38.559143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a730 (107): Transport endpoint is not connected 00:37:12.167 [2024-07-23 03:36:38.560120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a730 (9): Bad file descriptor 00:37:12.167 [2024-07-23 03:36:38.561120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:12.167 [2024-07-23 03:36:38.561139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:12.167 [2024-07-23 03:36:38.561167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:12.167 request: 00:37:12.167 { 00:37:12.167 "name": "nvme0", 00:37:12.167 "trtype": "tcp", 00:37:12.167 "traddr": "127.0.0.1", 00:37:12.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.167 "adrfam": "ipv4", 00:37:12.167 "trsvcid": "4420", 00:37:12.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.167 "psk": "key1", 00:37:12.167 "method": "bdev_nvme_attach_controller", 00:37:12.167 "req_id": 1 00:37:12.167 } 00:37:12.167 Got JSON-RPC error response 00:37:12.167 response: 00:37:12.167 { 00:37:12.167 "code": -5, 00:37:12.167 "message": "Input/output error" 00:37:12.167 } 00:37:12.167 03:36:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.167 03:36:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.167 03:36:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.167 03:36:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.167 03:36:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:12.167 03:36:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.167 03:36:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.167 03:36:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.167 03:36:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.167 03:36:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.426 03:36:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:12.426 03:36:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:12.426 03:36:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:12.426 03:36:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.426 03:36:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.426 03:36:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.426 03:36:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.684 03:36:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:12.684 03:36:39 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:12.684 03:36:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:12.942 03:36:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:12.942 03:36:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:13.200 03:36:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:13.200 03:36:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:13.200 03:36:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.458 03:36:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:13.458 03:36:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EcSnSoX1n0 00:37:13.458 03:36:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:13.458 03:36:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.458 03:36:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.716 [2024-07-23 03:36:40.050028] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EcSnSoX1n0': 0100660 00:37:13.716 [2024-07-23 03:36:40.050073] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:13.716 request: 00:37:13.716 { 00:37:13.716 "name": "key0", 00:37:13.716 "path": "/tmp/tmp.EcSnSoX1n0", 00:37:13.716 "method": "keyring_file_add_key", 00:37:13.716 "req_id": 1 00:37:13.716 } 00:37:13.716 Got JSON-RPC error response 00:37:13.716 response: 00:37:13.716 { 00:37:13.716 "code": -1, 00:37:13.716 "message": "Operation not permitted" 00:37:13.716 } 00:37:13.716 03:36:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:13.716 03:36:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:13.716 03:36:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:13.716 03:36:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:13.716 03:36:40 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EcSnSoX1n0 00:37:13.716 03:36:40 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.716 03:36:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EcSnSoX1n0 00:37:13.975 03:36:40 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EcSnSoX1n0 00:37:13.975 03:36:40 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:13.975 03:36:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.975 03:36:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.975 03:36:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.975 03:36:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.975 03:36:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.233 03:36:40 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:14.233 03:36:40 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.233 03:36:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:14.233 03:36:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.233 03:36:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:14.234 03:36:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.234 03:36:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:14.234 03:36:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.234 03:36:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.234 03:36:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.234 [2024-07-23 03:36:40.804060] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EcSnSoX1n0': No such file or directory 00:37:14.234 [2024-07-23 03:36:40.804097] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:14.234 [2024-07-23 03:36:40.804129] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:14.234 [2024-07-23 03:36:40.804142] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:14.234 [2024-07-23 03:36:40.804156] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:14.234 request: 00:37:14.234 { 00:37:14.234 "name": "nvme0", 00:37:14.234 "trtype": "tcp", 00:37:14.234 "traddr": "127.0.0.1", 00:37:14.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.234 "adrfam": "ipv4", 00:37:14.234 "trsvcid": "4420", 00:37:14.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.234 "psk": "key0", 00:37:14.234 "method": "bdev_nvme_attach_controller", 
00:37:14.234 "req_id": 1 00:37:14.234 } 00:37:14.234 Got JSON-RPC error response 00:37:14.234 response: 00:37:14.234 { 00:37:14.234 "code": -19, 00:37:14.234 "message": "No such device" 00:37:14.234 } 00:37:14.492 03:36:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:14.492 03:36:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:14.492 03:36:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:14.492 03:36:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:14.492 03:36:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:14.492 03:36:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:14.750 03:36:41 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.89mgukVbIv 00:37:14.750 03:36:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:14.750 03:36:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:14.750 03:36:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:14.751 03:36:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:14.751 03:36:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:14.751 03:36:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:14.751 03:36:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:14.751 03:36:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.89mgukVbIv 00:37:14.751 03:36:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.89mgukVbIv 00:37:14.751 03:36:41 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.89mgukVbIv 00:37:14.751 03:36:41 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.89mgukVbIv 00:37:14.751 03:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.89mgukVbIv 00:37:15.010 03:36:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.010 03:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.268 nvme0n1 00:37:15.268 03:36:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:15.268 03:36:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.268 03:36:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.268 03:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.268 03:36:41 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.268 03:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.527 03:36:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:15.527 03:36:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:15.527 03:36:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:15.785 03:36:42 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:15.785 03:36:42 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:15.785 03:36:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.785 03:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.785 03:36:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.041 03:36:42 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:16.041 03:36:42 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:16.041 03:36:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.041 03:36:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.041 03:36:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.042 03:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.042 03:36:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.300 03:36:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:16.300 03:36:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:16.300 03:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:16.558 03:36:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:16.558 03:36:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:16.558 03:36:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.817 03:36:43 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:16.817 03:36:43 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.89mgukVbIv 00:37:16.817 03:36:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.89mgukVbIv 00:37:17.076 03:36:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6PLnuLCRgh 00:37:17.076 03:36:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6PLnuLCRgh 00:37:17.336 03:36:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.336 03:36:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.595 nvme0n1 00:37:17.595 03:36:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:17.595 03:36:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:17.854 03:36:44 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:17.854 "subsystems": [ 00:37:17.854 { 00:37:17.854 "subsystem": "keyring", 00:37:17.854 "config": [ 00:37:17.854 { 00:37:17.854 "method": "keyring_file_add_key", 00:37:17.854 "params": { 00:37:17.854 "name": "key0", 00:37:17.854 "path": "/tmp/tmp.89mgukVbIv" 00:37:17.854 } 00:37:17.854 }, 00:37:17.854 { 00:37:17.854 "method": "keyring_file_add_key", 00:37:17.854 "params": { 00:37:17.854 "name": "key1", 00:37:17.854 "path": "/tmp/tmp.6PLnuLCRgh" 00:37:17.854 } 00:37:17.854 } 00:37:17.854 ] 00:37:17.854 }, 00:37:17.854 { 00:37:17.854 "subsystem": "iobuf", 00:37:17.854 "config": [ 00:37:17.854 { 00:37:17.854 "method": "iobuf_set_options", 00:37:17.854 "params": { 00:37:17.854 "small_pool_count": 8192, 00:37:17.854 "large_pool_count": 1024, 00:37:17.854 "small_bufsize": 8192, 00:37:17.854 "large_bufsize": 135168 00:37:17.854 } 00:37:17.854 } 00:37:17.854 ] 00:37:17.854 }, 00:37:17.854 { 00:37:17.854 "subsystem": "sock", 00:37:17.854 "config": [ 00:37:17.854 { 00:37:17.854 "method": "sock_set_default_impl", 00:37:17.854 "params": { 00:37:17.854 "impl_name": "posix" 00:37:17.854 } 00:37:17.854 }, 00:37:17.854 { 00:37:17.854 "method": "sock_impl_set_options", 00:37:17.854 "params": { 00:37:17.854 "impl_name": "ssl", 00:37:17.854 "recv_buf_size": 4096, 00:37:17.854 "send_buf_size": 4096, 00:37:17.854 "enable_recv_pipe": true, 00:37:17.854 "enable_quickack": false, 00:37:17.854 "enable_placement_id": 0, 00:37:17.854 "enable_zerocopy_send_server": true, 00:37:17.854 "enable_zerocopy_send_client": false, 00:37:17.854 "zerocopy_threshold": 0, 00:37:17.854 "tls_version": 0, 00:37:17.854 "enable_ktls": false 00:37:17.854 } 00:37:17.854 }, 00:37:17.854 { 00:37:17.854 "method": "sock_impl_set_options", 00:37:17.854 "params": { 00:37:17.854 "impl_name": "posix", 00:37:17.854 "recv_buf_size": 2097152, 00:37:17.854 "send_buf_size": 2097152, 00:37:17.854 "enable_recv_pipe": true, 00:37:17.854 "enable_quickack": false, 00:37:17.854 "enable_placement_id": 0, 00:37:17.854 "enable_zerocopy_send_server": true, 00:37:17.854 "enable_zerocopy_send_client": false, 00:37:17.854 "zerocopy_threshold": 0, 00:37:17.854 "tls_version": 0, 00:37:17.854 "enable_ktls": false 00:37:17.854 } 00:37:17.854 } 00:37:17.854 ] 00:37:17.854 }, 00:37:17.855 { 00:37:17.855 "subsystem": "vmd", 00:37:17.855 "config": [] 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "subsystem": "accel", 00:37:17.855 "config": [ 00:37:17.855 { 00:37:17.855 "method": "accel_set_options", 00:37:17.855 "params": { 00:37:17.855 "small_cache_size": 128, 00:37:17.855 "large_cache_size": 16, 00:37:17.855 "task_count": 2048, 00:37:17.855 "sequence_count": 2048, 00:37:17.855 "buf_count": 2048 00:37:17.855 } 00:37:17.855 } 00:37:17.855 ] 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "subsystem": "bdev", 00:37:17.855 "config": [ 00:37:17.855 { 00:37:17.855 "method": "bdev_set_options", 00:37:17.855 "params": { 00:37:17.855 "bdev_io_pool_size": 65535, 00:37:17.855 "bdev_io_cache_size": 256, 00:37:17.855 "bdev_auto_examine": true, 00:37:17.855 "iobuf_small_cache_size": 128, 
00:37:17.855 "iobuf_large_cache_size": 16 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_raid_set_options", 00:37:17.855 "params": { 00:37:17.855 "process_window_size_kb": 1024 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_iscsi_set_options", 00:37:17.855 "params": { 00:37:17.855 "timeout_sec": 30 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_nvme_set_options", 00:37:17.855 "params": { 00:37:17.855 "action_on_timeout": "none", 00:37:17.855 "timeout_us": 0, 00:37:17.855 "timeout_admin_us": 0, 00:37:17.855 "keep_alive_timeout_ms": 10000, 00:37:17.855 "arbitration_burst": 0, 00:37:17.855 "low_priority_weight": 0, 00:37:17.855 "medium_priority_weight": 0, 00:37:17.855 "high_priority_weight": 0, 00:37:17.855 "nvme_adminq_poll_period_us": 10000, 00:37:17.855 "nvme_ioq_poll_period_us": 0, 00:37:17.855 "io_queue_requests": 512, 00:37:17.855 "delay_cmd_submit": true, 00:37:17.855 "transport_retry_count": 4, 00:37:17.855 "bdev_retry_count": 3, 00:37:17.855 "transport_ack_timeout": 0, 00:37:17.855 "ctrlr_loss_timeout_sec": 0, 00:37:17.855 "reconnect_delay_sec": 0, 00:37:17.855 "fast_io_fail_timeout_sec": 0, 00:37:17.855 "disable_auto_failback": false, 00:37:17.855 "generate_uuids": false, 00:37:17.855 "transport_tos": 0, 00:37:17.855 "nvme_error_stat": false, 00:37:17.855 "rdma_srq_size": 0, 00:37:17.855 "io_path_stat": false, 00:37:17.855 "allow_accel_sequence": false, 00:37:17.855 "rdma_max_cq_size": 0, 00:37:17.855 "rdma_cm_event_timeout_ms": 0, 00:37:17.855 "dhchap_digests": [ 00:37:17.855 "sha256", 00:37:17.855 "sha384", 00:37:17.855 "sha512" 00:37:17.855 ], 00:37:17.855 "dhchap_dhgroups": [ 00:37:17.855 "null", 00:37:17.855 "ffdhe2048", 00:37:17.855 "ffdhe3072", 00:37:17.855 "ffdhe4096", 00:37:17.855 "ffdhe6144", 00:37:17.855 "ffdhe8192" 00:37:17.855 ] 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_nvme_attach_controller", 00:37:17.855 "params": { 00:37:17.855 "name": "nvme0", 00:37:17.855 "trtype": "TCP", 00:37:17.855 "adrfam": "IPv4", 00:37:17.855 "traddr": "127.0.0.1", 00:37:17.855 "trsvcid": "4420", 00:37:17.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.855 "prchk_reftag": false, 00:37:17.855 "prchk_guard": false, 00:37:17.855 "ctrlr_loss_timeout_sec": 0, 00:37:17.855 "reconnect_delay_sec": 0, 00:37:17.855 "fast_io_fail_timeout_sec": 0, 00:37:17.855 "psk": "key0", 00:37:17.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.855 "hdgst": false, 00:37:17.855 "ddgst": false 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_nvme_set_hotplug", 00:37:17.855 "params": { 00:37:17.855 "period_us": 100000, 00:37:17.855 "enable": false 00:37:17.855 } 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "method": "bdev_wait_for_examine" 00:37:17.855 } 00:37:17.855 ] 00:37:17.855 }, 00:37:17.855 { 00:37:17.855 "subsystem": "nbd", 00:37:17.855 "config": [] 00:37:17.855 } 00:37:17.855 ] 00:37:17.855 }' 00:37:17.855 03:36:44 keyring_file -- keyring/file.sh@114 -- # killprocess 623419 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 623419 ']' 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@950 -- # kill -0 623419 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 623419 00:37:17.855 03:36:44 keyring_file -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 623419' 00:37:17.855 killing process with pid 623419 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@965 -- # kill 623419 00:37:17.855 Received shutdown signal, test time was about 1.000000 seconds 00:37:17.855 00:37:17.855 Latency(us) 00:37:17.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.855 =================================================================================================================== 00:37:17.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.855 03:36:44 keyring_file -- common/autotest_common.sh@970 -- # wait 623419 00:37:18.114 03:36:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=624872 00:37:18.114 03:36:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 624872 /var/tmp/bperf.sock 00:37:18.114 03:36:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 624872 ']' 00:37:18.114 03:36:44 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:18.114 03:36:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.114 03:36:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:18.114 03:36:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:18.114 "subsystems": [ 00:37:18.114 { 00:37:18.114 "subsystem": "keyring", 00:37:18.114 "config": [ 00:37:18.114 { 00:37:18.114 "method": "keyring_file_add_key", 00:37:18.114 "params": { 00:37:18.114 "name": "key0", 00:37:18.114 "path": "/tmp/tmp.89mgukVbIv" 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "keyring_file_add_key", 00:37:18.114 "params": { 00:37:18.114 "name": "key1", 00:37:18.114 "path": "/tmp/tmp.6PLnuLCRgh" 00:37:18.114 } 00:37:18.114 } 00:37:18.114 ] 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "subsystem": "iobuf", 00:37:18.114 "config": [ 00:37:18.114 { 00:37:18.114 "method": "iobuf_set_options", 00:37:18.114 "params": { 00:37:18.114 "small_pool_count": 8192, 00:37:18.114 "large_pool_count": 1024, 00:37:18.114 "small_bufsize": 8192, 00:37:18.114 "large_bufsize": 135168 00:37:18.114 } 00:37:18.114 } 00:37:18.114 ] 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "subsystem": "sock", 00:37:18.114 "config": [ 00:37:18.114 { 00:37:18.114 "method": "sock_set_default_impl", 00:37:18.114 "params": { 00:37:18.114 "impl_name": "posix" 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "sock_impl_set_options", 00:37:18.114 "params": { 00:37:18.114 "impl_name": "ssl", 00:37:18.114 "recv_buf_size": 4096, 00:37:18.114 "send_buf_size": 4096, 00:37:18.114 "enable_recv_pipe": true, 00:37:18.114 "enable_quickack": false, 00:37:18.114 "enable_placement_id": 0, 00:37:18.114 "enable_zerocopy_send_server": true, 00:37:18.114 "enable_zerocopy_send_client": false, 00:37:18.114 "zerocopy_threshold": 0, 00:37:18.114 "tls_version": 0, 00:37:18.114 "enable_ktls": false 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "sock_impl_set_options", 00:37:18.114 "params": { 00:37:18.114 "impl_name": "posix", 00:37:18.114 "recv_buf_size": 2097152, 00:37:18.114 "send_buf_size": 2097152, 00:37:18.114 "enable_recv_pipe": true, 00:37:18.114 
"enable_quickack": false, 00:37:18.114 "enable_placement_id": 0, 00:37:18.114 "enable_zerocopy_send_server": true, 00:37:18.114 "enable_zerocopy_send_client": false, 00:37:18.114 "zerocopy_threshold": 0, 00:37:18.114 "tls_version": 0, 00:37:18.114 "enable_ktls": false 00:37:18.114 } 00:37:18.114 } 00:37:18.114 ] 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "subsystem": "vmd", 00:37:18.114 "config": [] 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "subsystem": "accel", 00:37:18.114 "config": [ 00:37:18.114 { 00:37:18.114 "method": "accel_set_options", 00:37:18.114 "params": { 00:37:18.114 "small_cache_size": 128, 00:37:18.114 "large_cache_size": 16, 00:37:18.114 "task_count": 2048, 00:37:18.114 "sequence_count": 2048, 00:37:18.114 "buf_count": 2048 00:37:18.114 } 00:37:18.114 } 00:37:18.114 ] 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "subsystem": "bdev", 00:37:18.114 "config": [ 00:37:18.114 { 00:37:18.114 "method": "bdev_set_options", 00:37:18.114 "params": { 00:37:18.114 "bdev_io_pool_size": 65535, 00:37:18.114 "bdev_io_cache_size": 256, 00:37:18.114 "bdev_auto_examine": true, 00:37:18.114 "iobuf_small_cache_size": 128, 00:37:18.114 "iobuf_large_cache_size": 16 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_raid_set_options", 00:37:18.114 "params": { 00:37:18.114 "process_window_size_kb": 1024 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_iscsi_set_options", 00:37:18.114 "params": { 00:37:18.114 "timeout_sec": 30 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_nvme_set_options", 00:37:18.114 "params": { 00:37:18.114 "action_on_timeout": "none", 00:37:18.114 "timeout_us": 0, 00:37:18.114 "timeout_admin_us": 0, 00:37:18.114 "keep_alive_timeout_ms": 10000, 00:37:18.114 "arbitration_burst": 0, 00:37:18.114 "low_priority_weight": 0, 00:37:18.114 "medium_priority_weight": 0, 00:37:18.114 "high_priority_weight": 0, 00:37:18.114 "nvme_adminq_poll_period_us": 10000, 00:37:18.114 "nvme_ioq_poll_period_us": 0, 00:37:18.114 "io_queue_requests": 512, 00:37:18.114 "delay_cmd_submit": true, 00:37:18.114 "transport_retry_count": 4, 00:37:18.114 "bdev_retry_count": 3, 00:37:18.114 "transport_ack_timeout": 0, 00:37:18.114 "ctrlr_loss_timeout_sec": 0, 00:37:18.114 "reconnect_delay_sec": 0, 00:37:18.114 "fast_io_fail_timeout_sec": 0, 00:37:18.114 "disable_auto_failback": false, 00:37:18.114 "generate_uuids": false, 00:37:18.114 "transport_tos": 0, 00:37:18.114 "nvme_error_stat": false, 00:37:18.114 "rdma_srq_size": 0, 00:37:18.114 "io_path_stat": false, 00:37:18.114 "allow_accel_sequence": false, 00:37:18.114 "rdma_max_cq_size": 0, 00:37:18.114 "rdma_cm_event_timeout_ms": 0, 00:37:18.114 "dhchap_digests": [ 00:37:18.114 "sha256", 00:37:18.114 "sha384", 00:37:18.114 "sha512" 00:37:18.114 ], 00:37:18.114 "dhchap_dhgroups": [ 00:37:18.114 "null", 00:37:18.114 "ffdhe2048", 00:37:18.114 "ffdhe3072", 00:37:18.114 "ffdhe4096", 00:37:18.114 "ffdhe6144", 00:37:18.114 "ffdhe8192" 00:37:18.114 ] 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_nvme_attach_controller", 00:37:18.114 "params": { 00:37:18.114 "name": "nvme0", 00:37:18.114 "trtype": "TCP", 00:37:18.114 "adrfam": "IPv4", 00:37:18.114 "traddr": "127.0.0.1", 00:37:18.114 "trsvcid": "4420", 00:37:18.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:18.114 "prchk_reftag": false, 00:37:18.114 "prchk_guard": false, 00:37:18.114 "ctrlr_loss_timeout_sec": 0, 00:37:18.114 "reconnect_delay_sec": 0, 00:37:18.114 "fast_io_fail_timeout_sec": 0, 00:37:18.114 "psk": 
"key0", 00:37:18.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:18.114 "hdgst": false, 00:37:18.114 "ddgst": false 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_nvme_set_hotplug", 00:37:18.114 "params": { 00:37:18.114 "period_us": 100000, 00:37:18.114 "enable": false 00:37:18.114 } 00:37:18.114 }, 00:37:18.114 { 00:37:18.114 "method": "bdev_wait_for_examine" 00:37:18.115 } 00:37:18.115 ] 00:37:18.115 }, 00:37:18.115 { 00:37:18.115 "subsystem": "nbd", 00:37:18.115 "config": [] 00:37:18.115 } 00:37:18.115 ] 00:37:18.115 }' 00:37:18.115 03:36:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.115 03:36:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:18.115 03:36:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:18.115 [2024-07-23 03:36:44.582511] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:18.115 [2024-07-23 03:36:44.582594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624872 ] 00:37:18.115 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.115 [2024-07-23 03:36:44.642463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.372 [2024-07-23 03:36:44.726488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.372 [2024-07-23 03:36:44.909824] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:18.941 03:36:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:18.941 03:36:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:18.941 03:36:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:18.941 03:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.941 03:36:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:19.199 03:36:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:19.199 03:36:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:19.199 03:36:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.199 03:36:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.199 03:36:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.457 03:36:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.457 03:36:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.457 03:36:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:19.457 03:36:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:19.457 03:36:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:19.457 03:36:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.457 03:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.457 03:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.457 03:36:46 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.715 03:36:46 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:19.715 03:36:46 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:19.715 03:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:19.715 03:36:46 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:19.973 03:36:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:19.973 03:36:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:19.973 03:36:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.89mgukVbIv /tmp/tmp.6PLnuLCRgh 00:37:19.973 03:36:46 keyring_file -- keyring/file.sh@20 -- # killprocess 624872 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 624872 ']' 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@950 -- # kill -0 624872 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 624872 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 624872' 00:37:19.973 killing process with pid 624872 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@965 -- # kill 624872 00:37:19.973 Received shutdown signal, test time was about 1.000000 seconds 00:37:19.973 00:37:19.973 Latency(us) 00:37:19.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.973 =================================================================================================================== 00:37:19.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:19.973 03:36:46 keyring_file -- common/autotest_common.sh@970 -- # wait 624872 00:37:20.231 03:36:46 keyring_file -- keyring/file.sh@21 -- # killprocess 623406 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 623406 ']' 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@950 -- # kill -0 623406 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 623406 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 623406' 00:37:20.231 killing process with pid 623406 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@965 -- # kill 623406 00:37:20.231 [2024-07-23 03:36:46.799741] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:20.231 03:36:46 keyring_file -- common/autotest_common.sh@970 -- # wait 623406 00:37:20.797 00:37:20.797 real 0m14.031s 00:37:20.797 user 
0m34.553s 00:37:20.797 sys 0m3.267s 00:37:20.797 03:36:47 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:20.797 03:36:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:20.797 ************************************ 00:37:20.797 END TEST keyring_file 00:37:20.797 ************************************ 00:37:20.797 03:36:47 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:20.797 03:36:47 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:20.797 03:36:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:20.797 03:36:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:20.797 03:36:47 -- common/autotest_common.sh@10 -- # set +x 00:37:20.797 ************************************ 00:37:20.797 START TEST keyring_linux 00:37:20.797 ************************************ 00:37:20.798 03:36:47 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:20.798 * Looking for test storage... 00:37:20.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.798 03:36:47 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.798 03:36:47 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.798 03:36:47 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.798 03:36:47 keyring_linux -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.798 03:36:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.798 03:36:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.798 03:36:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:20.798 03:36:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:20.798 /tmp/:spdk-test:key0 00:37:20.798 03:36:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:20.798 03:36:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:20.798 03:36:47 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:21.056 03:36:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:21.056 03:36:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:21.056 /tmp/:spdk-test:key1 00:37:21.056 03:36:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=625239 00:37:21.056 03:36:47 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:21.056 03:36:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 625239 00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 625239 ']' 00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
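The prep_key / format_interchange_psk steps above turn the hex string 00112233445566778899aabbccddeeff into the NVMeTLSkey-1:00:...: blobs written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal Python sketch of that interchange layout is given below; it is an illustration, not SPDK's implementation, and it assumes the payload is base64 of the ASCII key followed by its CRC32 packed little-endian, with the middle field naming the digest ("00" meaning none). The helper name build_psk and the output path are made up for the example.

    # Hedged sketch: rebuild an interchange PSK blob shaped like the
    # NVMeTLSkey-1:00:...: strings logged above. Assumptions: payload is
    # base64(ASCII key + CRC32 of the key), CRC packed little-endian,
    # middle field is the digest id ("00" = no hash).
    import base64
    import os
    import struct
    import zlib

    def build_psk(key: str, digest: int = 0) -> str:   # hypothetical helper name
        data = key.encode()
        crc = struct.pack("<I", zlib.crc32(data))
        blob = base64.b64encode(data + crc).decode()
        return f"NVMeTLSkey-1:{digest:02d}:{blob}:"

    if __name__ == "__main__":
        psk = build_psk("00112233445566778899aabbccddeeff")
        path = "/tmp/example-psk"                       # stands in for the mktemp path
        with open(path, "w") as fh:
            fh.write(psk)
        os.chmod(path, 0o600)                           # mirrors the chmod 0600 step above
        print(psk)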
00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:21.056 03:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:21.056 [2024-07-23 03:36:47.429647] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:21.056 [2024-07-23 03:36:47.429740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625239 ] 00:37:21.056 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.056 [2024-07-23 03:36:47.486505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.056 [2024-07-23 03:36:47.575572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.314 03:36:47 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:21.314 03:36:47 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:21.314 03:36:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:21.314 03:36:47 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.314 03:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:21.314 [2024-07-23 03:36:47.814720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.314 null0 00:37:21.314 [2024-07-23 03:36:47.846771] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:21.314 [2024-07-23 03:36:47.847267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:21.314 03:36:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.315 03:36:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:21.315 86141960 00:37:21.315 03:36:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:21.315 570003208 00:37:21.315 03:36:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=625292 00:37:21.315 03:36:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:21.315 03:36:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 625292 /var/tmp/bperf.sock 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 625292 ']' 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:21.315 03:36:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:21.572 [2024-07-23 03:36:47.915020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
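Unlike keyring_file, keyring_linux stores the two blobs in the kernel session keyring rather than in files: the keyctl add user ... @s calls above return the serial numbers (86141960 and 570003208) that linux.sh later searches for, prints, and unlinks. The short Python sketch below drives the same round trip through the keyutils CLI via subprocess; it only uses the keyctl subcommands that actually appear in the log (add, search, print, unlink) and is an illustration for the reader, not part of the test, with a placeholder payload instead of a real PSK.

    # Hedged sketch: store, look up, and remove a key in the session keyring (@s)
    # the same way linux.sh does, by shelling out to the keyutils CLI.
    import subprocess

    def keyctl(*args: str) -> str:
        # Run keyctl and return its stdout with the trailing newline stripped.
        return subprocess.run(["keyctl", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    if __name__ == "__main__":
        desc = ":spdk-test:key0"                      # description used by the test
        blob = "NVMeTLSkey-1:00:example:"             # placeholder, not a valid PSK
        sn = keyctl("add", "user", desc, blob, "@s")  # returns the key serial number
        assert keyctl("search", "@s", "user", desc) == sn
        print(keyctl("print", sn))                    # shows the stored payload
        keyctl("unlink", sn)                          # drops the link, as cleanup does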
00:37:21.572 [2024-07-23 03:36:47.915101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625292 ] 00:37:21.572 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.572 [2024-07-23 03:36:47.979390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.572 [2024-07-23 03:36:48.070758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.572 03:36:48 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:21.572 03:36:48 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:21.572 03:36:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:21.572 03:36:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:21.830 03:36:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:21.830 03:36:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:22.398 03:36:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:22.398 03:36:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:22.398 [2024-07-23 03:36:48.921141] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:22.656 nvme0n1 00:37:22.656 03:36:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:22.656 03:36:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:22.656 03:36:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:22.656 03:36:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:22.656 03:36:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:22.656 03:36:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.913 03:36:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:22.913 03:36:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:22.913 03:36:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:22.913 03:36:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:22.913 03:36:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.913 03:36:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.913 03:36:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@25 -- # sn=86141960 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:23.171 
03:36:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 86141960 == \8\6\1\4\1\9\6\0 ]] 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 86141960 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:23.171 03:36:49 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.171 Running I/O for 1 seconds... 00:37:24.108 00:37:24.108 Latency(us) 00:37:24.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.108 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:24.108 nvme0n1 : 1.03 3674.56 14.35 0.00 0.00 34424.06 11845.03 49127.73 00:37:24.108 =================================================================================================================== 00:37:24.108 Total : 3674.56 14.35 0.00 0.00 34424.06 11845.03 49127.73 00:37:24.108 0 00:37:24.108 03:36:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:24.108 03:36:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:24.365 03:36:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:24.365 03:36:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:24.365 03:36:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:24.365 03:36:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:24.365 03:36:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:24.365 03:36:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.621 03:36:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:24.621 03:36:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:24.621 03:36:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:24.621 03:36:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:24.621 03:36:51 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:24.621 03:36:51 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:24.879 [2024-07-23 03:36:51.399879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:24.879 [2024-07-23 03:36:51.400406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91aea0 (107): Transport endpoint is not connected 00:37:24.879 [2024-07-23 03:36:51.401395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91aea0 (9): Bad file descriptor 00:37:24.879 [2024-07-23 03:36:51.402393] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:24.879 [2024-07-23 03:36:51.402415] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:24.879 [2024-07-23 03:36:51.402431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:24.879 request: 00:37:24.879 { 00:37:24.879 "name": "nvme0", 00:37:24.879 "trtype": "tcp", 00:37:24.879 "traddr": "127.0.0.1", 00:37:24.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.879 "adrfam": "ipv4", 00:37:24.879 "trsvcid": "4420", 00:37:24.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.879 "psk": ":spdk-test:key1", 00:37:24.879 "method": "bdev_nvme_attach_controller", 00:37:24.879 "req_id": 1 00:37:24.879 } 00:37:24.879 Got JSON-RPC error response 00:37:24.879 response: 00:37:24.879 { 00:37:24.879 "code": -5, 00:37:24.879 "message": "Input/output error" 00:37:24.879 } 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@33 -- # sn=86141960 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 86141960 00:37:24.879 1 links removed 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@33 -- # sn=570003208 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 570003208 00:37:24.879 1 links removed 00:37:24.879 03:36:51 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 625292 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 625292 ']' 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 625292 00:37:24.879 03:36:51 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 625292 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 625292' 00:37:24.880 killing process with pid 625292 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@965 -- # kill 625292 00:37:24.880 Received shutdown signal, test time was about 1.000000 seconds 00:37:24.880 00:37:24.880 Latency(us) 00:37:24.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.880 =================================================================================================================== 00:37:24.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:24.880 03:36:51 keyring_linux -- common/autotest_common.sh@970 -- # wait 625292 00:37:25.138 03:36:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 625239 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 625239 ']' 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 625239 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 625239 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 625239' 00:37:25.138 killing process with pid 625239 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@965 -- # kill 625239 00:37:25.138 03:36:51 keyring_linux -- common/autotest_common.sh@970 -- # wait 625239 00:37:25.731 00:37:25.731 real 0m4.807s 00:37:25.731 user 0m9.024s 00:37:25.731 sys 0m1.453s 00:37:25.731 03:36:52 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:25.731 03:36:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:25.731 ************************************ 00:37:25.731 END TEST keyring_linux 00:37:25.731 ************************************ 00:37:25.731 03:36:52 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:25.731 
03:36:52 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:25.731 03:36:52 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:25.731 03:36:52 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:25.731 03:36:52 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:25.731 03:36:52 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:25.731 03:36:52 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:25.731 03:36:52 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:25.731 03:36:52 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:25.731 03:36:52 -- common/autotest_common.sh@10 -- # set +x 00:37:25.731 03:36:52 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:25.731 03:36:52 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:25.731 03:36:52 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:25.731 03:36:52 -- common/autotest_common.sh@10 -- # set +x 00:37:27.642 INFO: APP EXITING 00:37:27.642 INFO: killing all VMs 00:37:27.642 INFO: killing vhost app 00:37:27.642 INFO: EXIT DONE 00:37:28.581 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:28.581 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:28.581 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:28.581 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:28.581 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:28.581 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:28.581 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:28.581 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:28.581 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:28.581 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:28.581 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:28.581 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:28.581 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:28.581 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:28.581 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:28.581 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:28.581 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:29.960 Cleaning 00:37:29.960 Removing: /var/run/dpdk/spdk0/config 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:29.960 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:29.960 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:29.960 Removing: /var/run/dpdk/spdk1/config 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:29.960 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:29.960 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:29.960 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:29.960 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:29.960 Removing: /var/run/dpdk/spdk2/config 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:29.960 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:29.960 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:29.960 Removing: /var/run/dpdk/spdk3/config 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:29.960 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:29.960 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:29.960 Removing: /var/run/dpdk/spdk4/config 00:37:29.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:29.961 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:29.961 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:29.961 Removing: /dev/shm/bdev_svc_trace.1 00:37:29.961 Removing: /dev/shm/nvmf_trace.0 00:37:29.961 Removing: /dev/shm/spdk_tgt_trace.pid305360 00:37:29.961 Removing: /var/run/dpdk/spdk0 00:37:29.961 Removing: /var/run/dpdk/spdk1 00:37:29.961 Removing: /var/run/dpdk/spdk2 00:37:29.961 Removing: /var/run/dpdk/spdk3 00:37:29.961 Removing: /var/run/dpdk/spdk4 00:37:29.961 Removing: /var/run/dpdk/spdk_pid303800 00:37:29.961 Removing: /var/run/dpdk/spdk_pid304548 00:37:29.961 Removing: /var/run/dpdk/spdk_pid305360 00:37:29.961 Removing: /var/run/dpdk/spdk_pid305799 00:37:29.961 Removing: /var/run/dpdk/spdk_pid306486 00:37:29.961 Removing: /var/run/dpdk/spdk_pid306626 00:37:29.961 Removing: /var/run/dpdk/spdk_pid307344 00:37:29.961 Removing: /var/run/dpdk/spdk_pid307355 00:37:29.961 Removing: /var/run/dpdk/spdk_pid307597 00:37:29.961 Removing: /var/run/dpdk/spdk_pid308786 00:37:29.961 Removing: /var/run/dpdk/spdk_pid309834 00:37:29.961 Removing: /var/run/dpdk/spdk_pid310016 00:37:29.961 Removing: /var/run/dpdk/spdk_pid310253 00:37:29.961 Removing: /var/run/dpdk/spdk_pid310526 00:37:29.961 Removing: /var/run/dpdk/spdk_pid310716 00:37:29.961 Removing: /var/run/dpdk/spdk_pid310875 00:37:29.961 
Removing: /var/run/dpdk/spdk_pid311036 00:37:29.961 Removing: /var/run/dpdk/spdk_pid311219 00:37:29.961 Removing: /var/run/dpdk/spdk_pid311672 00:37:29.961 Removing: /var/run/dpdk/spdk_pid314741 00:37:29.961 Removing: /var/run/dpdk/spdk_pid314926 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315089 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315092 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315523 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315532 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315957 00:37:29.961 Removing: /var/run/dpdk/spdk_pid315966 00:37:29.961 Removing: /var/run/dpdk/spdk_pid316147 00:37:29.961 Removing: /var/run/dpdk/spdk_pid316266 00:37:29.961 Removing: /var/run/dpdk/spdk_pid316431 00:37:29.961 Removing: /var/run/dpdk/spdk_pid316442 00:37:29.961 Removing: /var/run/dpdk/spdk_pid316927 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317088 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317282 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317451 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317473 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317658 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317816 00:37:29.961 Removing: /var/run/dpdk/spdk_pid317973 00:37:29.961 Removing: /var/run/dpdk/spdk_pid318231 00:37:29.961 Removing: /var/run/dpdk/spdk_pid318403 00:37:29.961 Removing: /var/run/dpdk/spdk_pid318561 00:37:29.961 Removing: /var/run/dpdk/spdk_pid318712 00:37:29.961 Removing: /var/run/dpdk/spdk_pid318986 00:37:29.961 Removing: /var/run/dpdk/spdk_pid319144 00:37:29.961 Removing: /var/run/dpdk/spdk_pid319304 00:37:29.961 Removing: /var/run/dpdk/spdk_pid319460 00:37:29.961 Removing: /var/run/dpdk/spdk_pid319736 00:37:29.961 Removing: /var/run/dpdk/spdk_pid319894 00:37:29.961 Removing: /var/run/dpdk/spdk_pid320046 00:37:29.961 Removing: /var/run/dpdk/spdk_pid320239 00:37:29.961 Removing: /var/run/dpdk/spdk_pid320475 00:37:29.961 Removing: /var/run/dpdk/spdk_pid320640 00:37:29.961 Removing: /var/run/dpdk/spdk_pid320794 00:37:29.961 Removing: /var/run/dpdk/spdk_pid321071 00:37:29.961 Removing: /var/run/dpdk/spdk_pid321228 00:37:29.961 Removing: /var/run/dpdk/spdk_pid321392 00:37:29.961 Removing: /var/run/dpdk/spdk_pid321568 00:37:29.961 Removing: /var/run/dpdk/spdk_pid321772 00:37:29.961 Removing: /var/run/dpdk/spdk_pid323841 00:37:29.961 Removing: /var/run/dpdk/spdk_pid377523 00:37:29.961 Removing: /var/run/dpdk/spdk_pid380021 00:37:29.961 Removing: /var/run/dpdk/spdk_pid386974 00:37:29.961 Removing: /var/run/dpdk/spdk_pid390260 00:37:29.961 Removing: /var/run/dpdk/spdk_pid392609 00:37:29.961 Removing: /var/run/dpdk/spdk_pid393017 00:37:29.961 Removing: /var/run/dpdk/spdk_pid400247 00:37:29.961 Removing: /var/run/dpdk/spdk_pid400250 00:37:29.961 Removing: /var/run/dpdk/spdk_pid400902 00:37:29.961 Removing: /var/run/dpdk/spdk_pid401448 00:37:29.961 Removing: /var/run/dpdk/spdk_pid402101 00:37:29.961 Removing: /var/run/dpdk/spdk_pid402523 00:37:29.961 Removing: /var/run/dpdk/spdk_pid402546 00:37:29.961 Removing: /var/run/dpdk/spdk_pid402886 00:37:29.961 Removing: /var/run/dpdk/spdk_pid402909 00:37:29.961 Removing: /var/run/dpdk/spdk_pid403027 00:37:29.961 Removing: /var/run/dpdk/spdk_pid403942 00:37:29.961 Removing: /var/run/dpdk/spdk_pid404726 00:37:29.961 Removing: /var/run/dpdk/spdk_pid405381 00:37:29.961 Removing: /var/run/dpdk/spdk_pid405776 00:37:29.961 Removing: /var/run/dpdk/spdk_pid405784 00:37:29.961 Removing: /var/run/dpdk/spdk_pid405926 00:37:29.961 Removing: /var/run/dpdk/spdk_pid406807 00:37:29.961 Removing: /var/run/dpdk/spdk_pid407524 00:37:29.961 Removing: 
/var/run/dpdk/spdk_pid412871 00:37:29.961 Removing: /var/run/dpdk/spdk_pid413148 00:37:29.961 Removing: /var/run/dpdk/spdk_pid415653 00:37:29.961 Removing: /var/run/dpdk/spdk_pid419341 00:37:29.961 Removing: /var/run/dpdk/spdk_pid421511 00:37:29.961 Removing: /var/run/dpdk/spdk_pid427775 00:37:29.961 Removing: /var/run/dpdk/spdk_pid432960 00:37:29.961 Removing: /var/run/dpdk/spdk_pid434155 00:37:29.961 Removing: /var/run/dpdk/spdk_pid434936 00:37:29.961 Removing: /var/run/dpdk/spdk_pid445496 00:37:29.961 Removing: /var/run/dpdk/spdk_pid447705 00:37:29.961 Removing: /var/run/dpdk/spdk_pid472885 00:37:29.961 Removing: /var/run/dpdk/spdk_pid475673 00:37:29.961 Removing: /var/run/dpdk/spdk_pid476848 00:37:29.961 Removing: /var/run/dpdk/spdk_pid478047 00:37:29.961 Removing: /var/run/dpdk/spdk_pid478179 00:37:29.961 Removing: /var/run/dpdk/spdk_pid478320 00:37:29.961 Removing: /var/run/dpdk/spdk_pid478454 00:37:29.961 Removing: /var/run/dpdk/spdk_pid478770 00:37:29.961 Removing: /var/run/dpdk/spdk_pid480082 00:37:29.961 Removing: /var/run/dpdk/spdk_pid480803 00:37:29.961 Removing: /var/run/dpdk/spdk_pid481116 00:37:30.221 Removing: /var/run/dpdk/spdk_pid482727 00:37:30.221 Removing: /var/run/dpdk/spdk_pid483149 00:37:30.221 Removing: /var/run/dpdk/spdk_pid483595 00:37:30.221 Removing: /var/run/dpdk/spdk_pid486059 00:37:30.221 Removing: /var/run/dpdk/spdk_pid489354 00:37:30.221 Removing: /var/run/dpdk/spdk_pid493451 00:37:30.221 Removing: /var/run/dpdk/spdk_pid516400 00:37:30.221 Removing: /var/run/dpdk/spdk_pid519155 00:37:30.221 Removing: /var/run/dpdk/spdk_pid523293 00:37:30.221 Removing: /var/run/dpdk/spdk_pid524305 00:37:30.221 Removing: /var/run/dpdk/spdk_pid525327 00:37:30.221 Removing: /var/run/dpdk/spdk_pid527870 00:37:30.221 Removing: /var/run/dpdk/spdk_pid530227 00:37:30.221 Removing: /var/run/dpdk/spdk_pid534426 00:37:30.221 Removing: /var/run/dpdk/spdk_pid534434 00:37:30.221 Removing: /var/run/dpdk/spdk_pid537203 00:37:30.221 Removing: /var/run/dpdk/spdk_pid537337 00:37:30.221 Removing: /var/run/dpdk/spdk_pid537472 00:37:30.221 Removing: /var/run/dpdk/spdk_pid537740 00:37:30.221 Removing: /var/run/dpdk/spdk_pid537777 00:37:30.221 Removing: /var/run/dpdk/spdk_pid538938 00:37:30.221 Removing: /var/run/dpdk/spdk_pid540113 00:37:30.221 Removing: /var/run/dpdk/spdk_pid541296 00:37:30.221 Removing: /var/run/dpdk/spdk_pid542471 00:37:30.221 Removing: /var/run/dpdk/spdk_pid543647 00:37:30.221 Removing: /var/run/dpdk/spdk_pid544825 00:37:30.221 Removing: /var/run/dpdk/spdk_pid548622 00:37:30.221 Removing: /var/run/dpdk/spdk_pid548958 00:37:30.221 Removing: /var/run/dpdk/spdk_pid550239 00:37:30.221 Removing: /var/run/dpdk/spdk_pid551103 00:37:30.221 Removing: /var/run/dpdk/spdk_pid555306 00:37:30.221 Removing: /var/run/dpdk/spdk_pid557168 00:37:30.221 Removing: /var/run/dpdk/spdk_pid560577 00:37:30.221 Removing: /var/run/dpdk/spdk_pid563885 00:37:30.221 Removing: /var/run/dpdk/spdk_pid569972 00:37:30.221 Removing: /var/run/dpdk/spdk_pid574329 00:37:30.221 Removing: /var/run/dpdk/spdk_pid574416 00:37:30.221 Removing: /var/run/dpdk/spdk_pid586734 00:37:30.221 Removing: /var/run/dpdk/spdk_pid587461 00:37:30.221 Removing: /var/run/dpdk/spdk_pid588053 00:37:30.221 Removing: /var/run/dpdk/spdk_pid588578 00:37:30.221 Removing: /var/run/dpdk/spdk_pid589051 00:37:30.221 Removing: /var/run/dpdk/spdk_pid589564 00:37:30.221 Removing: /var/run/dpdk/spdk_pid589968 00:37:30.221 Removing: /var/run/dpdk/spdk_pid590378 00:37:30.221 Removing: /var/run/dpdk/spdk_pid592753 00:37:30.221 Removing: 
/var/run/dpdk/spdk_pid593011 00:37:30.221 Removing: /var/run/dpdk/spdk_pid596793 00:37:30.221 Removing: /var/run/dpdk/spdk_pid596850 00:37:30.221 Removing: /var/run/dpdk/spdk_pid598447 00:37:30.221 Removing: /var/run/dpdk/spdk_pid603355 00:37:30.221 Removing: /var/run/dpdk/spdk_pid603360 00:37:30.221 Removing: /var/run/dpdk/spdk_pid606255 00:37:30.221 Removing: /var/run/dpdk/spdk_pid607664 00:37:30.221 Removing: /var/run/dpdk/spdk_pid609060 00:37:30.221 Removing: /var/run/dpdk/spdk_pid609917 00:37:30.221 Removing: /var/run/dpdk/spdk_pid611324 00:37:30.221 Removing: /var/run/dpdk/spdk_pid612078 00:37:30.221 Removing: /var/run/dpdk/spdk_pid617551 00:37:30.221 Removing: /var/run/dpdk/spdk_pid617848 00:37:30.221 Removing: /var/run/dpdk/spdk_pid618739 00:37:30.221 Removing: /var/run/dpdk/spdk_pid620291 00:37:30.221 Removing: /var/run/dpdk/spdk_pid620691 00:37:30.221 Removing: /var/run/dpdk/spdk_pid620976 00:37:30.221 Removing: /var/run/dpdk/spdk_pid623406 00:37:30.221 Removing: /var/run/dpdk/spdk_pid623419 00:37:30.221 Removing: /var/run/dpdk/spdk_pid624872 00:37:30.221 Removing: /var/run/dpdk/spdk_pid625239 00:37:30.221 Removing: /var/run/dpdk/spdk_pid625292 00:37:30.221 Clean 00:37:30.221 03:36:56 -- common/autotest_common.sh@1447 -- # return 0 00:37:30.221 03:36:56 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:30.221 03:36:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.221 03:36:56 -- common/autotest_common.sh@10 -- # set +x 00:37:30.479 03:36:56 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:30.479 03:36:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.479 03:36:56 -- common/autotest_common.sh@10 -- # set +x 00:37:30.479 03:36:56 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:30.479 03:36:56 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:30.479 03:36:56 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:30.479 03:36:56 -- spdk/autotest.sh@391 -- # hash lcov 00:37:30.479 03:36:56 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:30.479 03:36:56 -- spdk/autotest.sh@393 -- # hostname 00:37:30.479 03:36:56 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:30.479 geninfo: WARNING: invalid characters removed from testname! 
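
For reference, the coverage post-processing that begins with the capture above (and continues with the merge and filter passes in the next few log lines) reduces to a short lcov sequence. The sketch below is a condensed, hand-written reconstruction rather than the exact autotest.sh code: the workspace and output paths, the filter patterns, and the cov_base.info/cov_test.info names are taken from this log, $(hostname) stands in for the spdk-gp-11 tag, and the genhtml/geninfo --rc switches shown in the log are omitted for brevity.

#!/usr/bin/env bash
# Condensed sketch of the lcov post-processing shown in this log
# (assumptions noted above; cov_base.info is assumed to have been
# captured earlier in the run, before the tests executed).
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Capture the post-test counters from the build tree, tagged with the host name.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# Merge the pre-test baseline with the test capture into one report.
lcov $LCOV_OPTS -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
     -o "$OUT_DIR/cov_total.info"

# Drop everything that is not SPDK code from the combined report,
# using the same exclusion patterns that appear in the log.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done
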
00:38:02.552 03:37:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:02.552 03:37:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:05.830 03:37:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:08.355 03:37:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:11.663 03:37:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:14.188 03:37:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:17.466 03:37:43 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:17.466 03:37:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.466 03:37:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:17.466 03:37:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.466 03:37:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.466 03:37:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.466 03:37:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.466 03:37:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.466 03:37:43 -- paths/export.sh@5 -- $ export PATH 00:38:17.466 03:37:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.466 03:37:43 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:17.466 03:37:43 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:17.466 03:37:43 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721698663.XXXXXX 00:38:17.466 03:37:43 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721698663.H0rhPl 00:38:17.466 03:37:43 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:17.466 03:37:43 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:38:17.466 03:37:43 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:17.466 03:37:43 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:17.466 03:37:43 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:17.466 03:37:43 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:17.466 03:37:43 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:17.466 03:37:43 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:17.466 03:37:43 -- common/autotest_common.sh@10 -- $ set +x 00:38:17.466 03:37:43 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:17.466 03:37:43 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:17.466 03:37:43 -- pm/common@17 -- $ local monitor 00:38:17.466 03:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.466 03:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.466 03:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.466 
03:37:43 -- pm/common@21 -- $ date +%s 00:38:17.466 03:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.466 03:37:43 -- pm/common@21 -- $ date +%s 00:38:17.466 03:37:43 -- pm/common@25 -- $ sleep 1 00:38:17.466 03:37:43 -- pm/common@21 -- $ date +%s 00:38:17.466 03:37:43 -- pm/common@21 -- $ date +%s 00:38:17.466 03:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721698663 00:38:17.466 03:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721698663 00:38:17.466 03:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721698663 00:38:17.466 03:37:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721698663 00:38:17.466 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721698663_collect-vmstat.pm.log 00:38:17.466 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721698663_collect-cpu-load.pm.log 00:38:17.466 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721698663_collect-cpu-temp.pm.log 00:38:17.466 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721698663_collect-bmc-pm.bmc.pm.log 00:38:18.399 03:37:44 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:18.399 03:37:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:18.399 03:37:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:18.399 03:37:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:18.399 03:37:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:18.399 03:37:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:18.399 03:37:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:18.399 03:37:44 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:18.399 03:37:44 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:18.399 03:37:44 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:18.399 03:37:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:18.399 03:37:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:18.399 03:37:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:18.399 03:37:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:18.399 03:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:18.399 03:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:18.399 03:37:44 -- pm/common@44 -- $ pid=636502 00:38:18.399 03:37:44 -- pm/common@50 -- $ kill -TERM 636502 00:38:18.399 03:37:44 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:18.400 03:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:18.400 03:37:44 -- pm/common@44 -- $ pid=636504 00:38:18.400 03:37:44 -- pm/common@50 -- $ kill -TERM 636504 00:38:18.400 03:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:18.400 03:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:18.400 03:37:44 -- pm/common@44 -- $ pid=636506 00:38:18.400 03:37:44 -- pm/common@50 -- $ kill -TERM 636506 00:38:18.400 03:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:18.400 03:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:18.400 03:37:44 -- pm/common@44 -- $ pid=636538 00:38:18.400 03:37:44 -- pm/common@50 -- $ sudo -E kill -TERM 636538 00:38:18.400 + [[ -n 199231 ]] 00:38:18.400 + sudo kill 199231 00:38:18.410 [Pipeline] } 00:38:18.428 [Pipeline] // stage 00:38:18.433 [Pipeline] } 00:38:18.450 [Pipeline] // timeout 00:38:18.455 [Pipeline] } 00:38:18.472 [Pipeline] // catchError 00:38:18.477 [Pipeline] } 00:38:18.494 [Pipeline] // wrap 00:38:18.501 [Pipeline] } 00:38:18.516 [Pipeline] // catchError 00:38:18.525 [Pipeline] stage 00:38:18.527 [Pipeline] { (Epilogue) 00:38:18.541 [Pipeline] catchError 00:38:18.543 [Pipeline] { 00:38:18.557 [Pipeline] echo 00:38:18.559 Cleanup processes 00:38:18.565 [Pipeline] sh 00:38:18.845 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:18.845 636643 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:18.845 636767 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:18.858 [Pipeline] sh 00:38:19.136 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:19.136 ++ grep -v 'sudo pgrep' 00:38:19.136 ++ awk '{print $1}' 00:38:19.136 + sudo kill -9 636643 00:38:19.146 [Pipeline] sh 00:38:19.424 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:29.400 [Pipeline] sh 00:38:29.681 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:29.681 Artifacts sizes are good 00:38:29.695 [Pipeline] archiveArtifacts 00:38:29.702 Archiving artifacts 00:38:29.904 [Pipeline] sh 00:38:30.186 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:30.200 [Pipeline] cleanWs 00:38:30.210 [WS-CLEANUP] Deleting project workspace... 00:38:30.210 [WS-CLEANUP] Deferred wipeout is used... 00:38:30.217 [WS-CLEANUP] done 00:38:30.219 [Pipeline] } 00:38:30.238 [Pipeline] // catchError 00:38:30.249 [Pipeline] sh 00:38:30.535 + logger -p user.info -t JENKINS-CI 00:38:30.545 [Pipeline] } 00:38:30.560 [Pipeline] // stage 00:38:30.565 [Pipeline] } 00:38:30.580 [Pipeline] // node 00:38:30.585 [Pipeline] End of Pipeline 00:38:30.624 Finished: SUCCESS